Abstract:
In this article, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noise under three kinds of performance criteria related to the final value of the expectation and variance of the output. In the first problem it is desired to minimise the final variance of the output subject to a restriction on its final expectation; in the second it is desired to maximise the final expectation of the output subject to a restriction on its final variance; and in the third a performance criterion composed of a linear combination of the final variance and expectation of the output of the system is considered. We present explicit sufficient conditions for the existence of an optimal control strategy for these problems, generalising previous results in the literature. We conclude this article by presenting a numerical example of an asset-liability management model for pension funds with regime switching.
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switchings. Problems of this kind have been recently considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean-variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
We define a new type of self-similarity for one-parameter families of stochastic processes, which applies to certain important families of processes that are not self-similar in the conventional sense. These include Hougaard Lévy processes such as Poisson processes, Brownian motions with drift and inverse Gaussian processes, and some new fractional Hougaard motions defined as moving averages of a Hougaard Lévy process. Such families have many properties in common with ordinary self-similar processes, including the form of their covariance functions and the fact that they appear as limits in a Lamperti-type limit theorem for families of stochastic processes.
Abstract:
Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models then underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo (MCMC) algorithms, allowing inference over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
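The overdispersion effect described above — a random effect on the success probability inflating the variance beyond what a plain binomial model assumes — can be illustrated with a small simulation. This is a sketch, not the authors' apple tissue culture model; all parameter values are made up.

```python
import random

def beta_binomial(n, alpha, beta, rng):
    """One overdispersed count: a binomial whose success probability is
    itself drawn from a Beta(alpha, beta) random effect."""
    p = rng.betavariate(alpha, beta)
    return sum(rng.random() < p for _ in range(n))

rng = random.Random(42)
n, alpha, beta = 20, 3.0, 7.0        # mean proportion alpha/(alpha+beta) = 0.3
draws = [beta_binomial(n, alpha, beta, rng) for _ in range(5000)]

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / (len(draws) - 1)
binomial_var = n * 0.3 * 0.7         # 4.2: the variance a plain binomial assumes
print(f"empirical variance {var:.2f} vs binomial variance {binomial_var:.2f}")
```

The empirical variance comes out well above the binomial value, which is exactly the gap that makes naive binomial standard errors too small and significance tests too liberal.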
Abstract:
A method for the determination of artemether (ART) and its main metabolite dihydroartemisinin (DHA) in plasma employing liquid-phase microextraction (LPME) for sample preparation prior to liquid chromatography-tandem mass spectrometry (LC-MS-MS) was developed. The analytes were extracted from 1 mL of plasma utilizing a two-phase LPME procedure with artemisinin as internal standard. Using the optimized LPME conditions, mean absolute recovery rates of 25 and 32% for DHA and ART, respectively, were achieved using toluene-n-octanol (1:1, v/v) as organic phase with an extraction time of 30 min. After extraction, the analytes were resolved within 5 min using a mobile phase consisting of methanol-ammonium acetate (10 mmol L(-1), pH 5.0; 80:20, v/v) on a laboratory-made column based on poly(methyltetradecylsiloxane) attached to a zirconized-silica support. MS-MS detection was employed using an electrospray interface in the positive ion mode. The method developed was linear over the range of 5-1000 ng mL(-1) for both analytes. Precision and accuracy were within acceptable levels of confidence (<15%). The assay was applied to the determination of these analytes in plasma from rats treated with ART. The two-phase LPME procedure is affordable and the solvent consumption was very low compared to the traditional methods of sample preparation. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
The residence time distribution and mean residence time of a 10% sodium bicarbonate solution that is dried in a conventional spouted bed with inert bodies were measured with the stimulus-response method. Methylene blue was used as a chemical tracer, and the effects of the paste feed mode, size distribution of the inert bodies, and mean particle size on the residence times and dried powder properties were investigated. The results showed that the residence time distributions could be best reproduced by the perfect mixing cell model, that is, N = 1 in the continuous stirred tank reactors in series model. The mean residence times ranged from 6.04 to 12.90 min and were significantly affected by the factors studied. Analysis of variance on the experimental data showed that mean residence times were affected by the mean diameter of the inert bodies at a significance level of 1% and by the size distribution at a level of 5%. Moreover, altering the paste feed from dripping to pneumatic atomization affected mean residence time at a 5% significance level. The dried powder characteristics proved to be adequate for further industrial manipulation, as demonstrated by the low moisture content, narrow range of particle size, and good flow properties. The results of this research are significant in the study of the drying of heat-sensitive materials because they show that by simultaneously changing the size distribution and average size of the inert bodies, the mean residence times of a paste can be reduced by half, thus decreasing losses due to degradation.
Abstract:
DHEA, a steroid hormone synthesized from cholesterol by cells of the adrenal cortex, plays an essential role in enhancing the host's resistance to different experimental infections. Receptors for this hormone can be found in distinct immune cells (especially macrophages) that are known to be the first line of defense against Trypanosoma cruzi infection. These cells operate through an indirect pathway, releasing nitric oxide (NO) and cytokines such as TNF-alpha and IL-12, which in turn trigger an enhancement of natural killer cells and lymphocytes, which finally secrete pro- and anti-inflammatory cytokines. The effects of pre- and post-infection DHEA treatment on production of IL-12, TNF-alpha and NO were evaluated. T. cruzi-infected macrophages post-treated with DHEA displayed enhanced concentrations of TNF-alpha, IL-12 and NO. Probably, the mechanisms that induced the production of cytokines by infected cells are more efficient when the immune system has been stimulated first by parasite invasion, suggesting that the protective role of DHEA is greater when administered post infection. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
The coefficient of variation (CV; the standard deviation of response times divided by the mean response time) is a measure of response time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (intermediate ESL, advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
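The measure itself is simple to compute; below is a minimal sketch with made-up RT values (not the study's data), showing the pattern the abstract describes: a faster responder whose CV is also smaller.

```python
from statistics import mean, stdev

def cv(rts):
    """Coefficient of variation: RT standard deviation over mean RT."""
    return stdev(rts) / mean(rts)

# Hypothetical per-trial lexical decision RTs (ms). The faster (more
# proficient) responder here also shows the smaller CV, the pattern
# read as a signature of automaticity rather than mere speed-up.
slow_rts = [820, 700, 760, 900, 840]
fast_rts = [510, 530, 495, 520, 505]

print(cv(slow_rts), cv(fast_rts))
```

The rCV-RT index in the abstract is then just the correlation, across participants, between these per-participant mean RTs and CVs.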
Abstract:
Predicted area under curve (AUC), mean transit time (MTT) and normalized variance (CV2) data have been compared for parent compound and generated metabolite following an impulse input into the liver. Models studied were the well-stirred (tank) model, tube model, a distributed tube model, dispersion model (Danckwerts and mixed boundary conditions) and tanks-in-series model. It is well known that discrimination between models for a parent solute is greatest when the parent solute is highly extracted by the liver. With the metabolite, the greatest model differences for MTT and CV2 occur when parent solute is poorly extracted. In all cases the predictions of the distributed tube, dispersion, and tanks-in-series models are between the predictions of the tank and tube models. The dispersion model with mixed boundary conditions yields identical predictions to those for the distributed tube model (assuming an inverse Gaussian distribution of tube transit times). The dispersion model with Danckwerts boundary conditions and the tanks-in-series model give similar predictions to the dispersion (mixed boundary conditions) and distributed tube models. The normalized variance for parent compound is dependent upon hepatocyte permeability only within a distinct range of permeability values. This range is similar for each model, but the order of magnitude predicted for normalized variance is model dependent. Only for a one-compartment system is the MTT for generated metabolite equal to the sum of MTTs for the parent compound and preformed metabolite administered as parent.
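The AUC, MTT and CV2 compared above are normalized moments of the outflow curve; here is a minimal numerical sketch (the mono-exponential test input is illustrative only — for c(t) = exp(-t), AUC, MTT and CV2 all equal 1):

```python
import math

def transit_moments(t, c):
    """AUC, MTT and normalized variance CV2 of an outflow curve c(t),
    from trapezoidal integration of its zeroth, first and second moments."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) * (t[i + 1] - t[i]) / 2
                   for i in range(len(t) - 1))
    auc = trapz(c)                                          # zeroth moment
    mtt = trapz([ti * ci for ti, ci in zip(t, c)]) / auc    # first moment
    m2 = trapz([ti ** 2 * ci for ti, ci in zip(t, c)]) / auc
    return auc, mtt, (m2 - mtt ** 2) / mtt ** 2             # CV2 = var / MTT^2

# Mono-exponential washout c(t) = exp(-t): AUC = 1, MTT = 1, CV2 = 1
t = [i * 0.01 for i in range(5001)]                         # 0 .. 50
c = [math.exp(-ti) for ti in t]
auc, mtt, cv2 = transit_moments(t, c)
```

The model comparisons in the abstract amount to comparing the MTT and CV2 each model predicts for these same integrals.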
Abstract:
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into or radial diffusion in hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the predictions of MTT and CV2 for the extended and conventional models are essentially identical irrespective of the magnitude of rate constants representing permeability, volume, and clearance parameters, provided that there is significant hepatic extraction. In conclusion, the application of a newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
Abstract:
There is concern over the safety of calcium channel blockers (CCBs) in acute coronary disease. We sought to determine if patients taking CCBs at the time of admission with acute myocardial infarction (AMI) had a higher case-fatality compared with those taking beta-blockers or neither medication. Clinical and drug treatment variables at the time of hospital admission predictive of survival at 28 days were examined in a community-based registry of patients aged under 65 years admitted to hospital for suspected AMI in Perth, Australia, between 1984 and 1993. Among 7766 patients, 1291 (16.6%) were taking a CCB and 1259 (16.2%) a beta-blocker alone at hospital admission. Patients taking CCBs had a worse clinical profile than those taking a beta-blocker alone or neither drug (control group), and a higher unadjusted 28-day mortality (17.6% versus 9.3% and 11.1% respectively, both P < 0.001). There was no significant heterogeneity with respect to mortality between nifedipine, diltiazem, or verapamil when used alone, or with a beta-blocker. After adjustment for factors predictive of death at 28 days, patients taking a CCB were found not to have an excess chance of death compared with the control group (odds ratio [OR] 1.06, 95% confidence interval [CI]; 0.87, 1.30), whereas those taking a beta-blocker alone had a lower odds of death (OR 0.75, 95% CI; 0.59, 0.94). These results indicate that established calcium channel blockade is not associated with an excess risk of death following AMI once other differences between patients are taken into account, but neither does it have the survival advantage seen with prior beta-blocker therapy.
Abstract:
Off-resonance RF pre-saturation was used to obtain contrast in MRI images of polymer gel dosimeters irradiated to doses up to 50 Gy. Two different polymer gel dosimeters, composed of 2-hydroxyethyl acrylate or methacrylic acid monomers mixed with N,N'-methylenebisacrylamide (BIS) dispersed in an aqueous gelatin matrix, were evaluated. Radiation-induced polymerization of the co-monomers generates a fast-relaxing insoluble polymer. Saturation of the polymer using off-resonance Gaussian RF pulses prior to a spin-echo read-out with a short echo time leads to contrast that is dependent on the absorbed dose. This contrast is attributed to magnetization transfer (MT) between free water and the polymer, and direct saturation of water was found to be negligible under the prevailing experimental conditions. The usefulness of MT imaging was assessed by computing the dose resolution obtained with this technique. We found that a low dose resolution value could be obtained over a wide range of doses with a single experiment. This is an advantage over multiple spin echo (MSE) experiments using a single echo spacing, where an optimal dose resolution is achieved over only very limited ranges of doses. The results suggest MT imaging protocols may be developed into a useful tool for polymer gel dosimetry.
Abstract:
This paper proposes the use of the q-Gaussian mutation with self-adaptation of the shape of the mutation distribution in evolutionary algorithms. The shape of the q-Gaussian mutation distribution is controlled by a real parameter q. In the proposed method, the real parameter q of the q-Gaussian mutation is encoded in the chromosome of individuals and hence is allowed to evolve during the evolutionary process. In order to test the new mutation operator, evolution strategy and evolutionary programming algorithms with self-adapted q-Gaussian mutation generated from anisotropic and isotropic distributions are presented. The theoretical analysis of the q-Gaussian mutation is also provided. In the experimental study, the q-Gaussian mutation is compared to Gaussian and Cauchy mutations in the optimization of a set of test functions. Experimental results show the efficiency of the proposed method of self-adapting the mutation distribution in evolutionary algorithms.
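The building block of the method — drawing mutation steps from a q-Gaussian — can be sketched with the generalized Box-Muller transform of Thistleton, Marsh, Nelson and Tsallis. This is an illustration, not the paper's implementation: the fixed `q` passed to `mutate` stands in for the self-adapted value that the paper encodes in the chromosome.

```python
import math
import random

def ln_q(x, q):
    """q-logarithm; reduces to the natural logarithm as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, rng=random):
    """Standard q-Gaussian deviate (q < 3) via the generalized
    Box-Muller transform (Thistleton, Marsh, Nelson & Tsallis)."""
    qp = (1.0 + q) / (3.0 - q)
    u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
    u2 = rng.random()
    return math.sqrt(-2.0 * ln_q(u1, qp)) * math.cos(2.0 * math.pi * u2)

def mutate(x, sigma, q, rng=random):
    # In the self-adaptive scheme, q itself is part of the chromosome
    # and evolves; here it is passed in as a fixed parameter.
    return x + sigma * q_gaussian(q, rng)

random.seed(7)
samples = [q_gaussian(1.0) for _ in range(20000)]   # q = 1 recovers the Gaussian
m = sum(samples) / len(samples)
```

At q = 1 this reproduces ordinary Gaussian mutation, while q > 1 yields heavier tails (approaching Cauchy-like behavior as q grows toward 2), which is what lets a single evolving parameter interpolate between the two classical mutation operators.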
Abstract:
We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.