950 results for Mean Absolute Scaled Error (MASE)


Relevance:

20.00%

Publisher:

Abstract:

Background and Aims: The objective of the study was to compare data obtained from the Cosmed K4 b2 and the Deltatrac II™ metabolic cart in order to determine the validity of the Cosmed K4 b2 for measuring resting energy expenditure. Methods: Nine adult subjects (four male, five female) were measured. Resting energy expenditure (REE) was measured in consecutive sessions using the Cosmed K4 b2 alone, the Deltatrac II™ metabolic cart alone, and both devices simultaneously, performed in random order. REE data from both devices were then compared with values obtained from predictive equations. Results: Bland and Altman analysis revealed mean biases between the Cosmed K4 b2 and the Deltatrac II™ metabolic cart of 268 ± 702 kcal/day for REE, -0.0 ± 0.2 for respiratory quotient (RQ), 26.4 ± 118.2 ml/min for VCO2 and 51.6 ± 126.5 ml/min for VO2. The corresponding limits of agreement for all four variables were large. Bland and Altman analysis also revealed a larger mean bias between predicted and measured REE when using Cosmed K4 b2 data (-194 ± 603 kcal/day) than when using Deltatrac II™ metabolic cart data (73 ± 197 kcal/day). Conclusions: Variability between the two devices was very high and a degree of measurement error was detected. The Cosmed K4 b2 gave variable results when compared with predicted values and would therefore appear to be an invalid device for measuring resting energy expenditure in adults. © 2002 Elsevier Science Ltd. All rights reserved.
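As a point of reference for the analysis described above, the sketch below computes a Bland-Altman mean bias and 95% limits of agreement for two paired series of REE measurements; the numbers are illustrative placeholders, not the study data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two paired measurement series."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b                                 # paired differences (device A minus device B)
    bias = diff.mean()                           # mean bias
    sd = diff.std(ddof=1)                        # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, sd, loa

# Illustrative REE values (kcal/day) for nine paired sessions; not the study data.
cosmed    = [1850, 1620, 1710, 2050, 1480, 1900, 1760, 1680, 1590]
deltatrac = [1600, 1580, 1650, 1800, 1400, 1700, 1690, 1500, 1550]
bias, sd, loa = bland_altman(cosmed, deltatrac)
print(f"bias = {bias:.0f} kcal/day, SD = {sd:.0f}, LoA = ({loa[0]:.0f}, {loa[1]:.0f})")
```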

Relevance:

20.00%

Publisher:

Abstract:

For point-to-point multiple-input multiple-output (MIMO) systems, Dayal-Brehler-Varanasi have proved that training codes achieve the same diversity order as that of the underlying coherent space-time block code (STBC) if a simple minimum mean squared error (MMSE) estimate of the channel, formed using the training part, is employed for coherent detection of the underlying STBC. In this letter, a similar strategy involving a combination of training, channel estimation and detection in conjunction with existing coherent distributed STBCs is proposed for noncoherent communication in Amplify-and-Forward (AF) relay networks. Simulation results show that the proposed simple strategy outperforms distributed differential space-time coding for AF relay networks. Finally, the proposed strategy is extended to asynchronous relay networks using orthogonal frequency division multiplexing.
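A minimal sketch of the kind of training-based MMSE channel estimation described above, assuming i.i.d. unit-variance channel entries; the antenna counts, training length and SNR are illustrative choices, not values from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: nt transmit antennas, nr receive antennas, Lp training symbols.
nt, nr, Lp = 2, 2, 8
snr_lin = 10.0                      # assumed training SNR (linear)
sigma2 = 1.0 / snr_lin              # noise variance per received entry

P = rng.standard_normal((Lp, nt)) + 1j * rng.standard_normal((Lp, nt))   # training matrix
P /= np.linalg.norm(P, axis=0)      # unit-energy training columns
H = (rng.standard_normal((nt, nr)) + 1j * rng.standard_normal((nt, nr))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((Lp, nr)) + 1j * rng.standard_normal((Lp, nr)))
Y = P @ H + N                       # received training block

# Linear MMSE estimate assuming i.i.d. unit-variance channel entries:
#   H_hat = (P^H P + sigma2 I)^{-1} P^H Y
H_hat = np.linalg.solve(P.conj().T @ P + sigma2 * np.eye(nt), P.conj().T @ Y)
print("channel estimation MSE:", np.mean(np.abs(H - H_hat) ** 2))
```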

Relevance:

20.00%

Publisher:

Abstract:

The near-critical behavior of the susceptibility deduced from light-scattering measurements in a ternary liquid mixture of 3-methylpyridine, water, and sodium bromide has been determined. The measurements have been performed in the one-phase region near the lower consolute points of samples with different concentrations of sodium bromide. A crossover from Ising asymptotic behavior to mean-field behavior has been observed. As the concentration of sodium bromide increases, the crossover becomes more pronounced and the crossover temperature shifts closer to the critical temperature. The data are well described by a model that contains two independent crossover parameters. The crossover of the susceptibility critical exponent γ from its Ising value γ = 1.24 to the mean-field value γ = 1 is sharp and nonmonotonic. We conclude that there exists an additional length scale in the system, due to the presence of the electrolyte, which competes with the correlation length of the concentration fluctuations. An analogy with crossover phenomena in polymer solutions and a possible connection with multicritical phenomena are discussed.
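The crossover described above is usually exhibited through an effective exponent γ_eff(t) = -d ln χ / d ln t. The sketch below computes γ_eff for a synthetic susceptibility with a single, assumed crossover form; it is only an illustration and not the two-parameter crossover model used in the paper.

```python
import numpy as np

# Synthetic susceptibility with one crossover temperature t_x between Ising
# behaviour (gamma = 1.24) close to Tc and mean-field behaviour (gamma = 1)
# further away; the functional form and t_x are illustrative assumptions.
gamma_i, gamma_mf, t_x = 1.24, 1.0, 1e-3
t = np.logspace(-5, -1, 400)                 # reduced temperature (T - Tc)/Tc
chi = t**(-gamma_mf) * (1.0 + t_x / t) ** (gamma_i - gamma_mf)

# Effective exponent gamma_eff(t) = -d ln(chi) / d ln(t), the quantity usually
# plotted to exhibit the crossover.
gamma_eff = -np.gradient(np.log(chi), np.log(t))
print(gamma_eff[0], gamma_eff[-1])           # ~1.24 near Tc, tends to 1 far from Tc
```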

Relevance:

20.00%

Publisher:

Abstract:

Part I (Manjunath et al., 1994, Chem. Engng Sci. 49, 1451-1463) of this paper showed that the random particle numbers and size distributions in precipitation processes in very small drops obtained by stochastic simulation techniques deviate substantially from the predictions of conventional population balance. The foregoing problem is considered in this paper in terms of a mean field approximation obtained by applying a first-order closure to an unclosed set of mean field equations presented in Part I. The mean field approximation consists of two mutually coupled partial differential equations featuring (i) the probability distribution for residual supersaturation and (ii) the mean number density of particles for each size and supersaturation from which all average properties and fluctuations can be calculated. The mean field equations have been solved by finite difference methods for (i) crystallization and (ii) precipitation of a metal hydroxide both occurring in a single drop of specified initial supersaturation. The results for the average number of particles, average residual supersaturation, the average size distribution, and fluctuations about the average values have been compared with those obtained by stochastic simulation techniques and by population balance. This comparison shows that the mean field predictions are substantially superior to those of population balance as judged by the close proximity of results from the former to those from stochastic simulations. The agreement is excellent for broad initial supersaturations at short times but deteriorates progressively at larger times. For steep initial supersaturation distributions, predictions of the mean field theory are not satisfactory thus calling for higher-order approximations. The merit of the mean field approximation over stochastic simulation lies in its potential to reduce expensive computation times involved in simulation. More effective computational techniques could not only enhance this advantage of the mean field approximation but also make it possible to use higher-order approximations eliminating the constraints under which the stochastic dynamics of the process can be predicted accurately.
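For orientation, the sketch below integrates a conventional population-balance baseline of the kind the mean-field results are compared against: a growth-plus-nucleation equation solved with first-order upwind finite differences. The rate laws and constants are assumptions for illustration only, not the kinetics or the mean-field equations of the paper.

```python
import numpy as np

# Conventional population balance: dn/dt + d(G n)/dL = B at the smallest size,
# integrated with first-order upwind differences. Growth/nucleation laws and
# all constants below are illustrative assumptions.
nL, L_max = 200, 10.0
dL = L_max / nL
L = (np.arange(nL) + 0.5) * dL
n = np.zeros(nL)                      # number density over particle size
s = 1.0                               # residual (dimensionless) supersaturation
kg, kb, kv = 0.5, 5.0, 1.0            # assumed growth, nucleation, depletion constants

dt, nsteps = 1e-3, 5000
for _ in range(nsteps):
    G = kg * s                        # size-independent growth rate (assumed law)
    B = kb * s**2                     # nucleation rate (assumed law)
    flux = G * n                      # upwind flux (G > 0)
    dndt = -(flux - np.roll(flux, 1)) / dL
    dndt[0] = -flux[0] / dL + B / dL  # nucleation feeds the smallest size bin
    n = n + dt * dndt
    # supersaturation depleted by growth over the total particle surface (assumed)
    s = max(s - dt * kv * G * np.sum(n * L**2) * dL, 0.0)

print("total particles:", n.sum() * dL, "residual supersaturation:", s)
```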

Relevance:

20.00%

Publisher:

Abstract:

We report an experimental study of a new type of turbulent flow that is driven purely by buoyancy. The flow is due to an unstable density difference, created using brine and water, across the ends of a long (length/diameter = 9) vertical pipe. The Schmidt number Sc is 670, and the Rayleigh number (Ra) based on the density gradient and diameter is about 10^8. Under these conditions the convection is turbulent, and the time-averaged velocity at any point is 'zero'. The Reynolds number based on the Taylor microscale, Re_λ, is about 65. The pipe is long enough for there to be an axially homogeneous region, with a linear density gradient, about 6-7 diameters long at the midlength of the pipe. In the absence of a mean flow and, therefore, of mean shear, turbulence is sustained by buoyancy alone. The flow can thus be considered an axially homogeneous turbulent natural convection driven by a constant (unstable) density gradient. We characterize the flow using flow visualization and particle image velocimetry (PIV). Measurements show that the mean velocities and the Reynolds shear stresses are zero across the cross-section; the root mean squared (r.m.s.) vertical velocity is larger than the r.m.s. lateral velocities (by about one and a half times at the pipe axis). We identify some features of the turbulent flow using velocity correlation maps and the probability density functions of velocities and velocity differences. The flow away from the wall, affected mainly by buoyancy, consists of vertically moving fluid masses continually colliding and interacting, while the flow near the wall appears similar to that in wall-bounded shear-free turbulence. The turbulence is anisotropic, with the anisotropy increasing to large values as the wall is approached. A mixing length model with the diameter of the pipe as the length scale predicts the scalings for velocity fluctuations and the flux well. This model implies that the Nusselt number would scale as Ra^(1/2)Sc^(1/2), and the Reynolds number would scale as Ra^(1/2)Sc^(-1/2). The velocity and flux measurements appear to be consistent with the Ra^(1/2) scaling, although it must be pointed out that the Rayleigh number range covered was less than a decade. The Schmidt number was not varied to check the Sc scaling. The fluxes and the Reynolds numbers obtained in the present configuration are much higher than what would be obtained in Rayleigh-Bénard (R-B) convection for similar density differences.
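A tiny sketch evaluating the mixing-length scalings quoted above, with all prefactors set to one (an assumption; the actual constants would have to come from the measurements).

```python
# Mixing-length scalings quoted in the abstract, with all prefactors set to 1
# (an assumption; the true constants are determined experimentally).
def nusselt(Ra, Sc):
    return (Ra * Sc) ** 0.5        # Nu ~ Ra^(1/2) Sc^(1/2)

def reynolds(Ra, Sc):
    return (Ra / Sc) ** 0.5        # Re ~ Ra^(1/2) Sc^(-1/2)

Ra, Sc = 1e8, 670.0                # values reported in the abstract
print(nusselt(Ra, Sc), reynolds(Ra, Sc))
```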

Relevance:

20.00%

Publisher:

Abstract:

Melanopsin-containing intrinsically photosensitive retinal ganglion cells (ipRGCs) mediate the pupil light reflex (PLR) during light onset and at light offset (the post-illumination pupil response, PIPR). Recent evidence shows that the PLR and PIPR can provide non-invasive, objective markers of age-related retinal and optic nerve disease; however, there is no consensus on the effects of healthy ageing or refractive error on ipRGC-mediated pupil function. Here we isolated melanopsin contributions to the pupil control pathway in 59 human participants with no ocular pathology across a range of ages and refractive errors. We show that there is no effect of age or refractive error on ipRGC inputs to the human pupil control pathway. The stability of the ipRGC-mediated pupil response across the human lifespan provides a functional correlate of their robustness observed during ageing in rodent models.

Relevance:

20.00%

Publisher:

Abstract:

Recently, growing attention has been focused on the utility of biosensors for biomedical applications. Combined with nanomaterials and nanostructures, nano-scaled biosensors are being developed for biomedical applications such as pathogenic bacteria monitoring, virus recognition and disease biomarker detection, among others. These nano-biosensors offer a number of advantages and in many respects are ideally suited to biomedical applications: they can be made into extremely flexible devices, allowing rapid biomedical analysis with excellent selectivity and high sensitivity. This minireview discusses the literature published in recent years on advances in biomedical applications of nano-scaled biosensors for disease biomarker detection, especially in bio-imaging, the diagnosis of pathological cells and viruses, and the monitoring of pathogenic bacteria, thus providing insight into the future prospects of biosensors in relevant clinical applications.

Relevance:

20.00%

Publisher:

Abstract:

Quantifying the stiffness properties of soft tissues is essential for the diagnosis of many cardiovascular diseases such as atherosclerosis. In these pathologies it is widely agreed that arterial wall stiffness is an indicator of vulnerability. The present paper focuses on the carotid artery and proposes a new inversion methodology for deriving the stiffness properties of the wall from cine-MRI (magnetic resonance imaging) data. We address this problem by setting up a cost function defined as the distance between the modeled pixel signals and the measured ones. Minimizing this cost function yields the unknown stiffness properties of both the arterial wall and the surrounding tissues. The sensitivity of the identified properties to various sources of uncertainty is studied. Validation of the method is performed on a rubber phantom. The elastic modulus identified using the developed methodology has a mean error of 9.6%. The method is then applied to two young healthy subjects as a proof of practical feasibility, with identified values of 625 kPa and 587 kPa for one carotid artery of each subject.
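The inversion loop described above can be sketched as a least-squares minimization over the unknown modulus. The forward model below is a deliberately crude, hypothetical stand-in (thin-walled tube compliance) for the paper's cine-MRI signal model, and all parameter values are assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: predicted radius trace over the cardiac cycle for
# a given wall elastic modulus E (Pa). A crude thin-walled-tube assumption that
# stands in for the paper's pixel-signal model, which is not given in the abstract.
def forward_model(E, pressure_waveform, radius0=3.5e-3, thickness=0.7e-3):
    dr = pressure_waveform * radius0**2 / (E * thickness)
    return radius0 + dr                       # modeled radius signal (m)

def cost(log_E, measured_radius, pressure_waveform):
    modeled = forward_model(np.exp(log_E), pressure_waveform)
    return np.sum((modeled - measured_radius) ** 2)   # distance modeled vs measured

# Synthetic "measurement" generated with E = 600 kPa plus noise (illustrative only).
rng = np.random.default_rng(1)
p = 4.0e3 * (0.5 - 0.5 * np.cos(np.linspace(0, 2 * np.pi, 50)))   # assumed pulse pressure (Pa)
measured = forward_model(600e3, p) + rng.normal(0, 1e-5, p.size)

res = minimize(cost, x0=np.log(300e3), args=(measured, p), method="Nelder-Mead")
print("identified E = %.0f kPa" % (np.exp(res.x[0]) / 1e3))
```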

Relevance:

20.00%

Publisher:

Abstract:

Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios is considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated, and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived, as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear-air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
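For reference, the sketch below implements the conventional single-lag pulse-pair estimate of mean Doppler velocity from 16 complex samples; the radar wavelength, PRT, sign convention and simulated signal are assumptions, and the vector pulse-pair and maximum-entropy variants of the paper are not reproduced.

```python
import numpy as np

def pulse_pair_velocity(z, prt, wavelength):
    """Conventional single-lag pulse-pair estimate of mean Doppler velocity.

    z: complex echo samples at one range gate; prt: pulse repetition time (s);
    wavelength: radar wavelength (m). Sign convention matches the test below.
    """
    r1 = np.mean(z[1:] * np.conj(z[:-1]))        # lag-1 sample autocorrelation
    return wavelength / (4.0 * np.pi * prt) * np.angle(r1)

# Illustrative test: 16 noisy samples of a complex sinusoid at the Doppler shift
# corresponding to v = 12 m/s; radar parameters are assumed, not from the paper.
rng = np.random.default_rng(2)
wavelength, prt, n = 0.10, 1e-3, 16              # 10 cm wavelength, 1 ms PRT, 16 samples
v_true = 12.0
fd = 2.0 * v_true / wavelength                   # Doppler frequency (Hz)
t = np.arange(n) * prt
z = np.exp(2j * np.pi * fd * t) + 0.3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

print("estimated velocity:", pulse_pair_velocity(z, prt, wavelength))
```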

Relevance:

20.00%

Publisher:

Abstract:

The Fabens method is commonly used to estimate the growth parameters k and L∞ in the von Bertalanffy model from tag-recapture data. However, the Fabens method of estimation has an inherent bias when individual growth is variable. This paper presents an asymptotically unbiased method using a maximum likelihood approach that takes account of individual variability in both maximum length and age-at-tagging. It is assumed that each individual's growth follows a von Bertalanffy curve with its own maximum length and age-at-tagging. The parameter k is assumed to be a constant to ensure that the mean growth follows a von Bertalanffy curve and to avoid overparameterization. Our method also makes more efficient use of the measurements at tagging and recapture and includes diagnostic techniques for checking distributional assumptions. The method is reasonably robust and performs better than the Fabens method when individual growth differs from the von Bertalanffy relationship. When measurement error is negligible, the estimation involves maximizing the profile likelihood of one parameter only. The method is applied to tag-recapture data for the grooved tiger prawn (Penaeus semisulcatus) from the Gulf of Carpentaria, Australia.
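The Fabens baseline that the paper improves on can be sketched as a least-squares fit of the von Bertalanffy growth increment ΔL = (L∞ - L1)(1 - exp(-k Δt)); the simulated data and parameter values below are assumptions for illustration, and the paper's maximum likelihood method with random L∞ and age-at-tagging is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fabens increment model for tag-recapture data: expected growth increment over
# time at liberty dt, given length at tagging L1, is (Linf - L1)(1 - exp(-k dt)).
def fabens_increment(X, Linf, k):
    L1, dt = X
    return (Linf - L1) * (1.0 - np.exp(-k * dt))

# Simulated tag-recapture data with individual variability in Linf (all values assumed).
rng = np.random.default_rng(3)
n = 300
Linf_i = rng.normal(38.0, 3.0, n)            # individual asymptotic lengths (assumed units)
k_true = 0.05                                 # growth coefficient per week (assumed)
age_tag = rng.uniform(5, 40, n)               # age at tagging (weeks)
dt = rng.uniform(4, 30, n)                    # time at liberty (weeks)
L1 = Linf_i * (1 - np.exp(-k_true * age_tag))
L2 = Linf_i * (1 - np.exp(-k_true * (age_tag + dt))) + rng.normal(0, 0.3, n)

popt, _ = curve_fit(fabens_increment, (L1, dt), L2 - L1, p0=(35.0, 0.1))
print("Fabens estimates: Linf = %.1f, k = %.3f" % tuple(popt))
```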

Relevance:

20.00%

Publisher:

Abstract:

So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of treatment efficacy is within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarities to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs on error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
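For concreteness, the sketch below computes the frequentist operating characteristics (Type I error and power) of a two-stage design of Simon's type, which are the quantities such a Bayesian design is asked to control; the design parameters and response rates are illustrative, not those of the article.

```python
from scipy.stats import binom

def two_stage_reject_prob(p, n1, r1, n, r):
    """Probability of declaring the treatment promising in a Simon-type two-stage
    design: continue past stage 1 if X1 > r1 responses out of n1 patients, and
    declare success if the total X1 + X2 > r out of n patients."""
    prob = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        # need more than (r - x1) responses among the remaining n - n1 patients
        prob += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return prob

# Illustrative design (not from the article): n1 = 19, r1 = 4, n = 54, r = 15,
# with null and alternative response rates p0 = 0.20, p1 = 0.40.
p0, p1 = 0.20, 0.40
alpha = two_stage_reject_prob(p0, 19, 4, 54, 15)       # Type I error
power = two_stage_reject_prob(p1, 19, 4, 54, 15)       # 1 - Type II error
print(f"alpha = {alpha:.3f}, power = {power:.3f}")
```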

Relevance:

20.00%

Publisher:

Abstract:

This article is motivated by a lung cancer study where a regression model is involved and the response variable is too expensive to measure but the predictor variable can be measured easily with relatively negligible cost. This situation occurs quite often in medical studies, quantitative genetics, and ecological and environmental studies. In this article, using the idea of ranked-set sampling (RSS), we develop sampling strategies that can reduce cost and increase the efficiency of the regression analysis in the above-mentioned situation. The developed method is applied retrospectively to a lung cancer study in which the interest is to investigate the association between smoking status and three biomarkers: polyphenol DNA adducts, micronuclei, and sister chromatid exchanges. Optimal sampling schemes with different optimality criteria such as A-, D-, and integrated mean square error (IMSE)-optimality are considered in the application. With set size 10 in RSS, the improvement of the optimal schemes over simple random sampling (SRS) is substantial. For instance, by using the optimal scheme with IMSE-optimality, the IMSEs of the estimated regression functions for the three biomarkers are reduced to about half of those incurred by using SRS.
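A minimal sketch of the basic ranked-set sampling step with set size 10: rank each set on the cheap covariate and measure the expensive response only on one designated order statistic per set. The balanced allocation and the linear population below are assumptions; the article's A-, D- and IMSE-optimal allocations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def balanced_rss(population_x, respond, set_size=10, cycles=3):
    """Balanced ranked-set sample: in each cycle, for r = 1..set_size draw a set of
    `set_size` units, rank them on the cheap covariate x, and measure the expensive
    response only on the unit holding rank r."""
    xs, ys = [], []
    for _ in range(cycles):
        for r in range(set_size):
            idx = rng.choice(population_x.size, size=set_size, replace=False)
            chosen = idx[np.argsort(population_x[idx])[r]]    # unit with rank r on x
            xs.append(population_x[chosen])
            ys.append(respond(chosen))                        # costly measurement
    return np.array(xs), np.array(ys)

# Illustrative population: x is the cheap covariate, y the expensive response (assumed linear link).
N = 10_000
x_pop = rng.normal(0, 1, N)
y_pop = 2.0 + 1.5 * x_pop + rng.normal(0, 1, N)

x_rss, y_rss = balanced_rss(x_pop, respond=lambda i: y_pop[i], set_size=10, cycles=3)
slope, intercept = np.polyfit(x_rss, y_rss, 1)
print("RSS-based regression fit: slope %.2f, intercept %.2f" % (slope, intercept))
```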

Relevance:

20.00%

Publisher:

Abstract:

Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-spline (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those obtained via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.
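The polynomial reproduction condition mentioned above can be illustrated with a one-dimensional corrected-kernel (RKPM-style) construction; this is not the ERKM's NURBS stage, and the nodes, window function and basis order are assumptions chosen for illustration.

```python
import numpy as np

# 1D corrected-kernel (RKPM-style) shape functions with linear polynomial
# reproduction: phi_i(x) = p(0)^T M(x)^{-1} p(x - x_i) w(x - x_i), with
# M(x) = sum_i p(x - x_i) p(x - x_i)^T w(x - x_i).
nodes = np.linspace(0.0, 1.0, 11)
h = 2.5 * (nodes[1] - nodes[0])               # support radius of the cubic-spline window (assumed)

def window(r):
    s = np.abs(r) / h
    return np.where(s <= 0.5, 2/3 - 4*s**2 + 4*s**3,
           np.where(s <= 1.0, 4/3 - 4*s + 4*s**2 - 4*s**3/3, 0.0))

def shape_functions(x):
    d = x - nodes                              # shifted coordinates x - x_i
    w = window(d)
    P = np.vstack([np.ones_like(d), d])        # linear basis p(x - x_i) = [1, x - x_i]
    M = (P * w) @ P.T                          # 2x2 moment matrix
    c = np.linalg.solve(M, np.array([1.0, 0.0]))   # p(0)^T M^{-1}
    return (c @ P) * w                         # corrected shape functions phi_i(x)

x = 0.37
phi = shape_functions(x)
print("partition of unity:", phi.sum())        # ~1 (constant reproduction)
print("linear reproduction:", phi @ nodes, "vs", x)
```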

Relevance:

20.00%

Publisher:

Abstract:

Measurements of the ratio of the diffusion coefficient to mobility (D/μ) of electrons in SF6-N2 and CCl2F2-N2 mixtures over the E/p range from 80 upwards are reported. The limiting values (E/p)_c,mix and the corresponding mean energies ε_c,mix are found to vary with the percentage of the electronegative gas in the mixture (F) according to the following relationships: (E/p)_c,mix = (E/p)_c,N2 + [(E/p)_c,A - (E/p)_c,N2][1 - exp(-βF/(100 - F))] and ε_c,mix = ε_c,N2 + (ε_c,A - ε_c,N2)[1 - exp(-βF/(100 - F))], where A refers to the attaching gas (either SF6 or CCl2F2) and β is a constant, equal to 2.43 for SF6 mixtures and 5.12 for CCl2F2 mixtures. In the present study it has been possible to show that β is indeed related to a factor of synergism. Estimated γ values (secondary ionisation coefficients) did not show any significant variation with F for F < 50.
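A one-function sketch evaluating the reported mixture relationship for (E/p)_c,mix; the β value for SF6-N2 is taken from the abstract, while the pure-gas limiting (E/p)_c values below are placeholders, not values from the paper.

```python
import numpy as np

def ep_crit_mix(F, ep_n2, ep_attach, beta):
    """(E/p)_c of an N2 / electronegative-gas mixture containing F percent of the
    attaching gas, using the relationship reported in the abstract."""
    F = np.asarray(F, dtype=float)
    return ep_n2 + (ep_attach - ep_n2) * (1.0 - np.exp(-beta * F / (100.0 - F)))

# beta = 2.43 for SF6-N2 mixtures (value from the abstract); the pure-gas
# limiting (E/p)_c values below are placeholders for illustration only.
F = np.array([0, 5, 10, 25, 50, 75])
print(ep_crit_mix(F, ep_n2=40.0, ep_attach=120.0, beta=2.43))
```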

Relevance:

20.00%

Publisher:

Abstract:

For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on dynamic programming of finite horizon. This paper provides some results on the accuracy of such approximations, and, in particular, gives the error bounds for some well known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
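The finite-horizon calibration mentioned above can be sketched for a Beta-Bernoulli reward process: solve a retirement problem by dynamic programming and find, by bisection, the retirement reward at which continuing and retiring are equally attractive. The discount factor, horizon and prior below are illustrative assumptions, and no error bound from the paper is reproduced.

```python
from functools import lru_cache

def gittins_index_beta_bernoulli(a0, b0, discount=0.9, horizon=100, tol=1e-4):
    """Approximate Gittins index (per-step reward scale) of a Bernoulli arm with a
    Beta(a0, b0) posterior, via finite-horizon calibration: the index is the
    retirement reward lam at which continuing and retiring are equally attractive."""

    def continuation_minus_retire(lam):
        retire = lam / (1.0 - discount)          # value of retiring forever on reward lam

        @lru_cache(maxsize=None)
        def V(a, b, n):
            if n == 0:
                return retire
            p = a / (a + b)                      # posterior mean success probability
            cont = p * (1.0 + discount * V(a + 1, b, n - 1)) \
                 + (1.0 - p) * discount * V(a, b + 1, n - 1)
            return max(retire, cont)

        p0 = a0 / (a0 + b0)
        cont0 = p0 * (1.0 + discount * V(a0 + 1, b0, horizon - 1)) \
              + (1.0 - p0) * discount * V(a0, b0 + 1, horizon - 1)
        return cont0 - retire

    lo, hi = 0.0, 1.0                            # index lies in [0, 1] for Bernoulli rewards
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if continuation_minus_retire(mid) > 0.0:  # continuing still beats retiring
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(gittins_index_beta_bernoulli(1, 1))        # index for a uniform Beta(1, 1) prior
```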