86 results for asymptotic suboptimality
Abstract:
We consider the small-time behavior of interfaces of zero contact angle solutions to the thin-film equation. For a certain class of initial data, through asymptotic analyses, we deduce a wide variety of behavior for the free boundary point. These are supported by extensive numerical simulations. © 2007 Society for Industrial and Applied Mathematics
Abstract:
We present an application of birth-and-death processes on configuration spaces to a generalized mutation-selection balance model. The model describes the aging of a population as a process of accumulation of mutations in a genotype. A rigorous treatment demands that mutations correspond to points in abstract spaces. Our model describes an infinite-population, infinite-sites model in continuum. The dynamical equation which describes the system is of Kimura-Maruyama type. The problem can be posed in terms of evolution of states (differential equation) or, equivalently, represented in terms of a Feynman-Kac formula. The questions of interest are the existence of a solution, its asymptotic behavior, and properties of the limiting state. In the non-epistatic case the problem was posed and solved in [Steinsaltz D., Evans S.N., Wachter K.W., Adv. Appl. Math., 2005, 35(1)]. In our model we consider a topological space X as the space of positions of mutations and the influence of epistatic potentials.
Abstract:
In this article we review recent progress on the design, analysis and implementation of numerical-asymptotic boundary integral methods for the computation of frequency-domain acoustic scattering in a homogeneous unbounded medium by a bounded obstacle. The main aim of the methods is to allow computation of scattering at arbitrarily high frequency with finite computational resources.
Abstract:
We consider the problem of determining the pressure and velocity fields for a weakly compressible fluid flowing in a two-dimensional reservoir in an inhomogeneous, anisotropic porous medium, with vertical side walls and variable upper and lower boundaries, in the presence of vertical wells injecting or extracting fluid. Numerical solution of this problem may be expensive, particularly in the case that the depth scale of the layer h is small compared to the horizontal length scale l. This is a situation which occurs frequently in the application to oil reservoir recovery. Under the assumption that epsilon=h/l<<1, we show that the pressure field varies only in the horizontal direction away from the wells (the outer region). We construct two-term asymptotic expansions in epsilon in both the inner (near the wells) and outer regions and use the asymptotic matching principle to derive analytical expressions for all significant process quantities. This approach, via the method of matched asymptotic expansions, takes advantage of the small aspect ratio of the reservoir, epsilon, at precisely the stage where full numerical computations become stiff, and also reveals the detailed structure of the dynamics of the flow, both in the neighborhood of wells and away from wells.
Abstract:
We give a non-commutative generalization of classical symbolic coding in the presence of a synchronizing word. This is done by a scattering theoretical approach. Classically, the existence of a synchronizing word turns out to be equivalent to asymptotic completeness of the corresponding Markov process. A criterion for asymptotic completeness in general is provided by the regularity of an associated extended transition operator. Commutative and non-commutative examples are analysed.
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
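The abstract above does not reproduce the authors' new test, but the standard correctness check it contrasts with can be sketched. The check verifies that the ratio of the nonlinear perturbation to the tangent-linear prediction tends to 1 as the perturbation size shrinks. The model below is a toy stand-in of our own, not the paper's shallow-water discretization.

```python
import numpy as np

def nonlinear_model(x):
    # Toy stand-in for one step of a nonlinear model (hypothetical; the
    # paper's model is a semi-implicit semi-Lagrangian shallow-water code).
    return np.sin(x) + 0.5 * x ** 2

def tangent_linear_model(x, dx):
    # Hand-coded Jacobian of the toy model applied to a perturbation dx.
    return (np.cos(x) + x) * dx

def taylor_ratios(x, dx, alphas):
    # Standard test: r(a) = |M(x + a*dx) - M(x)| / |a * L(x)dx| -> 1 as a -> 0.
    out = []
    for a in alphas:
        num = np.linalg.norm(nonlinear_model(x + a * dx) - nonlinear_model(x))
        den = np.linalg.norm(a * tangent_linear_model(x, dx))
        out.append(num / den)
    return out

x = np.array([0.3, 1.1, 2.0])
dx = np.array([1.0, -0.5, 0.2])
ratios = taylor_ratios(x, dx, [1e-1, 1e-3, 1e-5])
```

The difficulty the abstract alludes to arises when finite-precision effects prevent the ratio from converging cleanly to 1 at very small perturbation sizes.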
Abstract:
While over-dispersion in capture–recapture studies is well known to lead to poor estimation of population size, current diagnostic tools to detect the presence of heterogeneity have not been specifically developed for capture–recapture studies. To address this, a simple and efficient method of testing for over-dispersion in zero-truncated count data is developed and evaluated. The proposed method generalizes an over-dispersion test previously suggested for un-truncated count data and may also be used for testing residual over-dispersion in zero-inflated data. Simulations suggest that the asymptotic distribution of the test statistic is standard normal and that this approximation is also reasonable for small sample sizes. The method is also shown to be more efficient than an existing test for over-dispersion adapted for the capture–recapture setting. Studies with zero-truncated and zero-inflated count data are used to illustrate the test procedures.
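The zero-truncated statistic itself is not given in the abstract. As a generic illustration of the kind of un-truncated test it generalizes, a score-type over-dispersion statistic for Poisson counts (asymptotically standard normal under the null) can be sketched as follows; the specific form is an assumption, not the paper's statistic.

```python
import numpy as np

def poisson_overdispersion_stat(x):
    # Score-type over-dispersion statistic for un-truncated Poisson counts:
    # T = sum((x - xbar)^2 - x) / (xbar * sqrt(2*n)),
    # approximately N(0, 1) under the Poisson (equi-dispersion) null.
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    return ((x - xbar) ** 2 - x).sum() / (xbar * np.sqrt(2.0 * n))

# Strongly over-dispersed data: variance 25 at mean 5.
counts = np.repeat([0, 10], 100)
t = poisson_overdispersion_stat(counts)  # large positive => over-dispersion
```

Large positive values of the statistic signal variance in excess of the mean, the heterogeneity that degrades population-size estimates.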
Abstract:
Two simple and frequently used capture–recapture estimates of the population size are compared: Chao's lower-bound estimate and Zelterman's estimate allowing for contaminated distributions. In the Poisson case it is shown that if there are only counts of ones and twos, the estimator of Zelterman is always bounded above by Chao's estimator. If counts larger than two exist, the estimator of Zelterman becomes larger than Chao's only if the ratio of the frequencies of counts of twos and ones is small enough. A similar analysis is provided for the binomial case. For a two-component mixture of Poisson distributions the asymptotic bias of both estimators is derived, and it is shown that the Zelterman estimator can experience large overestimation bias. A modified Zelterman estimator is suggested, and the bias-corrected version of Chao's estimator is also considered. All four estimators are compared in a simulation study.
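The two estimators compared above have standard closed forms, sketched here with illustrative frequencies (the numbers are our own, chosen to exhibit the ordering the abstract states for data containing only counts of ones and twos).

```python
import math

def chao_lower_bound(f1, f2, n_obs):
    # Chao's lower-bound estimator: N = n + f1^2 / (2*f2), where f1 and f2
    # are the frequencies of units caught once and twice, and n_obs is the
    # number of distinct units observed.
    return n_obs + f1 ** 2 / (2 * f2)

def zelterman_estimate(f1, f2, n_obs):
    # Zelterman's estimator: N = n / (1 - exp(-2*f2/f1)).
    return n_obs / (1 - math.exp(-2 * f2 / f1))

# Only counts of ones and twos: Zelterman is bounded above by Chao.
f1, f2 = 100, 20
n_obs = f1 + f2
chao = chao_lower_bound(f1, f2, n_obs)    # 370.0
zelt = zelterman_estimate(f1, f2, n_obs)  # ~364.0, below Chao's bound
```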
Abstract:
Reducing carbon conversion of ruminally degraded feed into methane increases feed efficiency and reduces emission of this potent greenhouse gas into the environment. Accurate, yet simple, predictions of methane production of ruminants on any feeding regime are important in the nutrition of ruminants, and in modeling methane produced by them. The current work investigated feed intake, digestibility and methane production by open-circuit respiration measurements in sheep fed 15 untreated, sodium hydroxide (NaOH) treated and anhydrous ammonia (NH3) treated wheat, barley and oat straws. In vitro fermentation characteristics of straws were obtained from incubations using the Hohenheim gas production system that measured gas production, true substrate degradability, short-chain fatty acid production and efficiency of microbial production from the ratio of truly degraded substrate to gas volume. In the 15 straws, organic matter (OM) intake and in vivo OM digestibility ranged from 563 to 1201 g and from 0.464 to 0.643, respectively. Total daily methane production ranged from 13.0 to 34.4 l, whereas methane produced/kg OM apparently digested in vivo varied from 35.0 to 61.8 l. The OM intake was positively related to total methane production (R2 = 0.81, P<0.0001), and in vivo OM digestibility was also positively associated with methane production (R2 = 0.67, P<0.001), but negatively associated with methane production/kg digestible OM intake (R2 = 0.61, P<0.001). In the in vitro incubations of the 15 straws, the ratio of acetate to propionate ranged from 2.3 to 2.8 (P<0.05) and efficiencies of microbial production ranged from 0.21 to 0.37 (P<0.05) at half asymptotic gas production.
Total daily methane production, calculated from in vitro fermentation characteristics (i.e., true degradability, SCFA ratio and efficiency of microbial production) and OM intake, compared well with methane measured in the open-circuit respiration chamber (y = 2.5 + 0.86x, R2 = 0.89, P<0.0001, Sy.x = 2.3). Methane production from forage fed ruminants can be predicted accurately by simple in vitro incubations combining true substrate degradability and gas volume measurements, if feed intake is known.
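The reported fit can be applied directly as a worked example. The assignment of x to the in vitro-calculated value and y to the chamber measurement is our reading of the abstract, not stated explicitly there.

```python
def chamber_methane_from_in_vitro(x_litres_per_day):
    # Fitted relation reported in the study: y = 2.5 + 0.86*x
    # (R^2 = 0.89, Sy.x = 2.3). Here x is taken as total daily methane
    # calculated from in vitro characteristics and OM intake, and y as the
    # open-circuit chamber measurement -- that assignment is our assumption.
    return 2.5 + 0.86 * x_litres_per_day

y = chamber_methane_from_in_vitro(20.0)  # 19.7 litres/day
```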
Abstract:
The objective of the present study was to determine the optimum plant density of four pigeonpea genotypes, representing early, medium and late maturing types, grown in five contrasting environments in Tanzania. ICPL 86005 (early), Kat 50/3 and QP 37 (medium) and Local (late) were grown at four plant densities (40 000-320 000 plants/ha) in irrigated and rainfed conditions at Ilonga and under rainfed conditions at Kibaha, Selian and Ismani. At maturity, total above-ground biomass and seed yield (SY) were measured. The highest yields were obtained in the irrigated experiment at Ilonga, where the medium/late genotypes produced 25 t biomass/ha and 5.6 t seed/ha. The lowest SY were at Kibaha, 0.58 to 1.76 t/ha, where a severe drought occurred. In nearly all cases the response to density was linear or asymptotic. The response of ICPL 86005 was significantly different from the other three genotypes. The optimum density for SY varied from 37 000 to 227 000 plants/ha in ICPL 86005, compared with 3000 to 101 000 plants/ha in the medium/late genotypes. The highest optimum density was at Selian and Ismani and the lowest at Ilonga and Kibaha, where drought occurred. Optimum densities therefore varied greatly with genotype (duration) and environment, and this variation needs to be considered when planning trials.
Abstract:
1. The feeding rates of many predators and parasitoids exhibit type II functional responses, with a decelerating rate of increase to reach an asymptotic value as the density of their prey or hosts increases. Holling's disc equation describes such relationships and predicts that the asymptotic feeding rate at high prey densities is set by handling time, while the rate at which feeding rate increases with increased prey density is determined by searching efficiency. Searching efficiency and handling time are also parameters in other models which describe the functional response. Models which incorporate functional responses in order to make predictions of the effects of food shortage thus rely upon a clear understanding and accurate quantification of searching efficiency and handling time. 2. Blackbirds Turdus merula exhibit a type II functional response and use pause-travel foraging, a foraging technique in which animals search for prey while stationary and then move to capture prey. Pause-travel foraging allows accurate direct measurement of feeding rate and of both searching efficiency and handling time. We use blackbirds as a model species to: (i) compare observed measures of both searching efficiency and handling time with those estimated by statistically fitting the disc equation to the observed functional response; and (ii) investigate alternative measures of searching efficiency derived by the established method, where search area is assumed to be circular, and a new method that we propose, where it is not. 3. We find that the disc equation can adequately explain the functional response of blackbirds feeding on artificial prey. However, this depends critically upon how searching efficiency is measured. Two variations on the previous method of measuring search area (a component of searching efficiency) overestimated searching efficiency, and hence predicted feeding rates higher than those observed.
Two variations of our alternative approach produced lower estimates of searching efficiency, closer to that estimated by fitting the disc equation, and hence more accurately predicted feeding rate. Our study shows the limitations of the previous method of measuring searching efficiency, and describes a new method for measuring searching efficiency more accurately.
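Holling's disc equation, central to the abstract above, has the standard form f(N) = aN / (1 + ahN); the parameter values below are illustrative, not taken from the study.

```python
def holling_disc(N, a, h):
    # Holling's disc equation: feeding rate f(N) = a*N / (1 + a*h*N),
    # with searching efficiency a and handling time h.
    # The initial slope is a; f(N) -> 1/h (the asymptote) as N grows,
    # matching the abstract: handling time sets the asymptotic feeding rate,
    # searching efficiency sets how fast it is approached.
    return a * N / (1.0 + a * h * N)

a, h = 0.5, 2.0
low = holling_disc(0.01, a, h)   # near the origin, slope ~ a
high = holling_disc(1e9, a, h)   # approaches the asymptote 1/h = 0.5
```

Because the asymptote depends only on h, an overestimated searching efficiency a inflates predicted feeding rates mainly at low and intermediate prey densities, which is where the study found the previous measurement method to fail.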
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t(RM gas)) and the maximum gas production rate (R-M gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t(RM gas) (P<0.05) and lower (P<0.001) R-M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed from the individual laboratories, relationships were good (i.e., adjusted R-2 = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R-2 = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R-2 = 0.844 or higher).
Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds. (c) 2005 Published by Elsevier B.V.
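The abstract does not give the exact modified Michaelis-Menten form used, but a common generalized parameterization consistent with the stated quantities (asymptote A, time B to half of A, a positive time of maximum rate for a sigmoidal curve) can be sketched; treat the functional form as an assumption.

```python
def gas_volume(t, A, B, c):
    # Generalized Michaelis-Menten gas production curve (one common
    # "modified" form; the paper's exact parameterization is not stated):
    # G(t) = A * t^c / (B^c + t^c).
    # G(B) = A/2, so B is the time to half the asymptotic volume A.
    return A * t ** c / (B ** c + t ** c)

def time_of_max_rate(B, c):
    # For shape parameter c > 1 the curve is sigmoidal and the rate dG/dt
    # peaks at t_RM = B * ((c - 1) / (c + 1))**(1/c).
    return B * ((c - 1.0) / (c + 1.0)) ** (1.0 / c)

A, B, c = 300.0, 12.0, 2.0   # illustrative values (ml gas, hours)
half = gas_volume(B, A, B, c)  # A/2 by construction
t_rm = time_of_max_rate(B, c)  # time of maximum gas production rate
```

Under this form, A, B, t_RM and the maximum rate R_M map one-to-one onto the parameters reported in the study, which is what makes cross-laboratory standardisation by regression feasible.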
Abstract:
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
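The selection bias discussed above can be made concrete with a small Monte Carlo sketch of our own (not the paper's design): even when every experimental arm has zero true advantage, the stage-1 estimate for the selected (best-looking) arm is positive on average.

```python
import numpy as np

# Simulate many two-stage trials: at stage 1, each of 3 experimental arms
# yields a normally distributed estimate of its advantage over control.
rng = np.random.default_rng(0)
n_trials, n_arms = 100_000, 3
stage1_estimates = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_arms))

# Selecting the arm with the largest estimate and reporting that estimate
# (the naive maximum-likelihood approach) is biased upward:
selected = stage1_estimates.max(axis=1)
bias = selected.mean()   # ~0.85 here (theory: 3/(2*sqrt(pi)) for 3 arms)
```

This is the bias the paper's adjusted estimate and sample-space-ordered confidence regions are designed to correct.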
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment, must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. 
Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
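As a hedged sketch of the group-sequential approach discussed above, an O'Brien-Fleming-type boundary on the z-scale can be written down and its overall type I error checked by simulation; the constant 2.04 for five equally spaced looks at two-sided alpha = 0.05 is a tabulated value we take as an assumption.

```python
import numpy as np

def obrien_fleming_bounds(K, c):
    # O'Brien-Fleming-type boundary: reject at look k (of K equally spaced
    # looks) if |Z_k| > c * sqrt(K / k); early looks need strong evidence.
    return [c * np.sqrt(K / k) for k in range(1, K + 1)]

K, c = 5, 2.04   # c ~ 2.04 is the tabulated constant for K = 5, alpha = 0.05
bounds = obrien_fleming_bounds(K, c)

# Monte Carlo check of the overall type I error rate under the null:
# Z_k is the standardized statistic on the accumulating data.
rng = np.random.default_rng(1)
increments = rng.normal(size=(200_000, K))
z = increments.cumsum(axis=1) / np.sqrt(np.arange(1, K + 1))
alpha_hat = (np.abs(z) > np.array(bounds)).any(axis=1).mean()  # ~0.05
```

The simulation mirrors the paper's premise: the boundary controls the overall error rate only while the interim statistics remain approximately normal.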
Abstract:
The control of fishing mortality via fishing effort remains fundamental to most fisheries management strategies even at the local community or co-management level. Decisions to support such strategies require knowledge of the underlying response of the catch to changes in effort. Even under adaptive management strategies, imprecise knowledge of the response is likely to help accelerate the adaptive learning process. Data and institutional capacity requirements to employ multi-species biomass dynamics and age-structured models invariably render their use impractical, particularly in less developed regions of the world. Surplus production models fitted to catch and effort data aggregated across all species offer viable alternatives. The current paper seeks models of this type that best describe the multi-species catch–effort responses in floodplain-rivers, lakes and reservoirs and reef-based fisheries based upon among-fishery comparisons, building on earlier work. Three alternative surplus production models were fitted to estimates of catch per unit area (CPUA) and fisher density for 258 fisheries in Africa, Asia and South America. In all cases examined, the best or equal best fitting model was the Fox type, explaining up to 90% of the variation in CPUA. For lake and reservoir fisheries in Africa and Asia, the Schaefer and an asymptotic model fitted equally well. The Fox model estimates of fisher density (fishers km−2) at maximum yield (iMY) for floodplain-rivers, African lakes and reservoirs and reef-based fisheries are 13.7 (95% CI [11.8, 16.4]), 27.8 (95% CI [17.5, 66.7]) and 643 (95% CI [459, 1075]), respectively, and compare well with earlier estimates. Corresponding estimates of maximum yield are also given. The significantly higher value of iMY for reef-based fisheries compared to estimates for rivers and lakes reflects the use of a different measure of fisher density based upon human population size estimates.
The models predict that maximum yield is achieved at a higher fishing intensity in Asian lakes compared to those in Africa. This may reflect the common practice in Asia of stocking lakes to augment natural recruitment. Because of the equilibrium assumptions underlying the models, all the estimates of maximum yield and corresponding levels of effort should be treated with caution.
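The two named surplus production forms have standard yield equations, sketched here; the parameter choice is ours, set only so that the Fox optimum lands at the paper's floodplain-river estimate of 13.7 fishers per km^2.

```python
import numpy as np

def fox_yield(f, a, b):
    # Fox surplus production model: Y = f * exp(a - b*f);
    # yield is maximised at effort (fisher density) f = 1/b.
    return f * np.exp(a - b * f)

def schaefer_yield(f, a, b):
    # Schaefer model: Y = f * (a - b*f); yield is maximised at f = a/(2*b).
    return f * (a - b * f)

# Illustrative calibration (our assumption): choose b so the Fox optimum
# matches the reported floodplain-river iMY of 13.7 fishers/km^2.
b = 1.0 / 13.7
f_grid = np.linspace(0.1, 60.0, 6000)
f_opt = f_grid[np.argmax(fox_yield(f_grid, a=1.0, b=b))]  # ~13.7
```

For the same b, the Schaefer optimum falls at half the Fox optimum, which illustrates why model choice matters when translating an estimated maximum-yield effort into management advice.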