947 results for Reduced-order Model


Relevance: 30.00%

Abstract:

OBJECTIVES: We developed a population model that describes the ocular penetration and pharmacokinetics of penciclovir in human aqueous humour and plasma after oral administration of famciclovir. METHODS: Fifty-three patients undergoing cataract surgery received a single oral dose of 500 mg of famciclovir prior to surgery. Concentrations of penciclovir in both plasma and aqueous humour were measured by HPLC with fluorescence detection. Concentrations in plasma and aqueous humour were fitted using a two-compartment model (NONMEM software). Inter-individual and intra-individual variabilities were quantified and the influence of demographics and physiopathological and environmental variables on penciclovir pharmacokinetics was explored. RESULTS: Drug concentrations were fitted using a two-compartment, open model with first-order transfer rates between plasma and aqueous humour compartments. Among tested covariates, creatinine clearance, co-intake of angiotensin-converting enzyme inhibitors and body weight significantly influenced penciclovir pharmacokinetics. Plasma clearance was 22.8 ± 9.1 L/h and clearance from the aqueous humour was 8.2 × 10⁻⁵ L/h. AUCs were 25.4 ± 10.2 and 6.6 ± 1.8 μg · h/mL in plasma and aqueous humour, respectively, yielding a penetration ratio of 0.28 ± 0.06. Simulated concentrations in the aqueous humour after administration of 500 mg of famciclovir three times daily were in the range of values required for 50% growth inhibition of non-resistant strains of the herpes zoster virus family. CONCLUSIONS: Plasma and aqueous penciclovir concentrations showed significant variability that could only be partially explained by renal function, body weight and comedication. Concentrations in the aqueous humour were much lower than in plasma, suggesting that factors in the blood-aqueous humour barrier might prevent its ocular penetration or that redistribution occurs in other ocular compartments.
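
As an illustration of the kind of structure fitted above, the sketch below simulates a generic two-compartment (plasma / aqueous humour) model with first-order oral absorption and first-order inter-compartment transfer, and reports an AUC-based penetration ratio. All rate constants and the dose handling are illustrative assumptions, not the published population estimates.

```python
# A minimal sketch, not the authors' NONMEM population model: a generic
# two-compartment plasma/aqueous-humour system with first-order absorption,
# elimination and transfer. All parameter values below are assumed.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 1.5, 0.35        # 1/h: absorption and plasma elimination (assumed)
k12, k21 = 0.02, 0.05     # 1/h: plasma <-> aqueous humour transfer (assumed)
dose = 500.0              # mg, treated here as fully converted to penciclovir

def rhs(t, y):
    gut, plasma, aqueous = y
    return [-ka * gut,
            ka * gut - (ke + k12) * plasma + k21 * aqueous,
            k12 * plasma - k21 * aqueous]

sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 200)
_, plasma, aqueous = sol.sol(t)

def auc(c):  # trapezoidal area under the curve
    return float(np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2.0)

# Analogue of the AUC_aqueous / AUC_plasma penetration ratio reported above
print(f"penetration ratio: {auc(aqueous) / auc(plasma):.3f}")
```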

Relevance: 30.00%

Abstract:

Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
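
For reference, the sketch below implements Holm's (1979) stepdown method mentioned above, the baseline whose Bonferroni-type critical values the resampling approaches improve upon; it is not the paper's general construction, just an illustration of stepdown logic and monotone critical values.

```python
# A minimal sketch of Holm's (1979) stepdown procedure: p-values are ordered
# and compared to progressively less strict Bonferroni-type thresholds,
# stopping at the first non-rejection.
import numpy as np

def holm_stepdown(pvals, alpha=0.05):
    """Return a boolean rejection decision for each hypothesis."""
    p = np.asarray(pvals)
    k = len(p)
    order = np.argsort(p)
    reject = np.zeros(k, dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (k - step):   # monotone critical values
            reject[idx] = True
        else:
            break                          # stepdown: stop at first acceptance
    return reject

print(holm_stepdown([0.001, 0.02, 0.04, 0.30]))
```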

Relevance: 30.00%

Abstract:

This paper argues that economic rationality and ethical behavior cannot be reduced to one another, casting doubt on the validity of formulas like 'profit is ethical' or 'ethics pays'. In order to express ethical dilemmas as opposing economic interest with ethical concerns, we propose a model of rational behavior that combines these two irreducible dimensions in an open but not arbitrary manner. Behaviors that are neither ethical nor profitable are considered irrational (non-arbitrariness). However, behaviors that are profitable but unethical, and behaviors that are ethical but not profitable, are all treated as rational (openness). Combining ethical concerns with economic interest, ethical business is in turn an optimal form of rationality between venality and sacrifice. Because everyone prefers to communicate that he acts ethically, ethical business remains ambiguous until some economic interest is actually sacrificed. We argue, however, that ethical business has an interest in demonstrating its consistency between communication and behavior through a transparent attitude. On the other hand, venal behaviors must remain confidential to hide the corresponding lack of consistency. This discursive approach, based on transparency and confidentiality, helps to further distinguish between ethical and unethical business behaviors.

Relevance: 30.00%

Abstract:

Studies assessing skin irritation caused by chemicals have traditionally used laboratory animals; however, such methods are questionable regarding their relevance for humans. New in vitro methods have been validated, such as the reconstructed human epidermis (RHE) models (Episkin®, Epiderm®), but their agreement (accuracy) with in vivo results such as the 4-h human patch test (HPT) is 76% at best (Epiderm®). There is a need to develop an in vitro method that better simulates the anatomo-pathological changes encountered in vivo. Our objective was to develop an in vitro method to determine skin irritation using viable human skin assessed by histopathology, and to compare the results for 4 tested substances with those of the main in vitro methods and the in vivo animal method (Draize test). Human skin removed during surgery was dermatomed and mounted on an in vitro flow-through diffusion cell system. Ten chemicals with known non-irritant (heptyl butyrate, hexyl salicylate, butyl methacrylate, isoproturon, bentazon, DEHP and methylisothiazolinone (MI)) or irritant properties (folpet, 1-bromohexane and methylchloroisothiazolinone (MCI/MI)), a negative control (sodium chloride) and a positive control (sodium lauryl sulphate) were applied. The skin was exposed for at least 4 h. Histopathology was performed to investigate signs of irritation (spongiosis, necrosis, vacuolization). For the 4 tested substances, we obtained 100% accuracy with the HPT model, 75% with the RHE models and 50% with the Draize test. The coefficients of variation (CV) between our three test batches were <0.1, showing good reproducibility. Furthermore, we objectively reported histopathological signs of irritation on an irritation scale: strong (folpet), significant (1-bromohexane), slight (MCI/MI at 750/250 ppm) and none (isoproturon, bentazon, DEHP and MI). This new in vitro test method performed effectively for the tested chemicals. It should be further validated using a greater number of substances, and tested in different laboratories in order to suitably evaluate reproducibility.
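
The two summary statistics reported above, accuracy against a reference classification and the coefficient of variation across test batches, can be reproduced with a few lines; the data in the sketch below are hypothetical placeholders, not the study's measurements.

```python
# A minimal sketch (assumed data) of the two summary statistics used above:
# classification accuracy against a reference method and the coefficient of
# variation (CV) across replicate test batches.
import numpy as np

# Hypothetical irritant/non-irritant calls for 4 substances (True = irritant)
reference = np.array([True, False, False, True])   # e.g. 4-h HPT outcome
new_method = np.array([True, False, False, True])  # the in vitro method's calls
accuracy = np.mean(reference == new_method)

# Hypothetical irritation scores for one substance over three batches
batches = np.array([2.1, 2.0, 2.2])
cv = batches.std(ddof=1) / batches.mean()

print(f"accuracy = {accuracy:.0%}, CV = {cv:.3f}")
```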

Relevance: 30.00%

Abstract:

By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
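
The leading term of the decomposition is the classical Black-Scholes price evaluated at a single volatility parameter (in the paper, the root-mean-square future average volatility). The sketch below computes only that baseline term for a European call; the correlation and volatility-of-volatility correction terms are not reproduced.

```python
# A minimal sketch of the leading term of the decomposition: the classical
# Black-Scholes call price at a given volatility parameter.
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example: at-the-money call, one year to maturity, sigma playing the role of
# the root-mean-square future average volatility (value assumed).
print(bs_call(S=100.0, K=100.0, r=0.02, sigma=0.25, T=1.0))
```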

Relevance: 30.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
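
A heavily simplified sketch of the selection principle is given below: candidate rules from nested model classes (here, polynomial regressions of increasing degree) are fitted on one half of the data, their empirical risks are computed on the other half, and the class minimizing risk plus a complexity penalty is chosen. The empirical-cover construction and the penalty form used in the paper are replaced by crude stand-ins, so this is only an illustration of complexity-penalized selection, not the paper's estimator.

```python
# A heavily simplified sketch of complexity-penalized model selection over
# nested classes (polynomial regressions). The penalty below is an assumed
# stand-in for the empirically estimated class complexity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(200)

# Split: first half to fit a candidate from each class, second half for risk
x1, y1, x2, y2 = x[:100], y[:100], x[100:], y[100:]

def penalty(degree, n):
    return degree / n          # crude stand-in for estimated class complexity

scores = {}
for degree in range(1, 10):
    coefs = np.polyfit(x1, y1, degree)                 # candidate from class F_degree
    risk = np.mean((np.polyval(coefs, x2) - y2) ** 2)  # empirical risk on holdout
    scores[degree] = risk + penalty(degree, len(x2))

best = min(scores, key=scores.get)
print("selected degree:", best)
```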

Relevance: 30.00%

Abstract:

This paper argues that a large technological innovation may lead to a merger wave by inducing entrepreneurs to seek funds from technologically knowledgeable firms - experts. When a large technological innovation occurs, the ability of non-experts (banks) to discriminate between good and bad quality projects is reduced. Experts can continue to charge a low rate of interest for financing because their expertise enables them to identify good quality projects and to avoid unprofitable investments. On the other hand, non-experts now charge a higher rate of interest in order to screen out bad projects. More entrepreneurs, therefore, disclose their projects to experts to raise funds from them. Such experts are, however, able to copy the projects, and disclosure to them invites the possibility of competition. Thus the entrepreneur and the expert may merge so as to achieve product market collusion. As well as rationalizing mergers, the model can also explain various forms of venture financing by experts such as corporate investors and business angels.

Relevance: 30.00%

Abstract:

This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread (the difference between the long-term rate and the short-term, instantaneous, riskless rate). Several new measures of "generalized duration" are presented and applied in different situations in order to manage market risk and yield curve risk. By means of these measures, we are able to compute the hedging ratios that allow us to immunize a bond portfolio by means of options on bonds. Focusing on the hedging problem, it is shown that these new measures allow us to immunize a bond portfolio against changes (parallel and/or in the slope) in the yield curve. Finally, a proposal for overcoming the limitations of conventional duration by means of these new measures is presented and illustrated numerically.
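
As a point of reference for the "generalized duration" measures, the sketch below computes the conventional one-factor hedge ratio obtained from modified durations, i.e. the baseline whose limitations the paper addresses; the bonds, coupons and flat yield used are illustrative assumptions.

```python
# A minimal sketch of conventional (one-factor) duration hedging; not the
# paper's two-factor "generalized duration" measures. Cash flows and the flat
# yield below are illustrative.
import numpy as np

def price_and_duration(cashflows, times, y):
    """Price and modified duration of a bond under a flat yield y."""
    disc = (1 + y) ** -np.asarray(times, dtype=float)
    price = np.sum(cashflows * disc)
    macaulay = np.sum(times * cashflows * disc) / price
    return price, macaulay / (1 + y)

# Assumed portfolio bond (5y, 5% coupon) and hedging bond (10y, 4% coupon)
p_port, d_port = price_and_duration(np.array([5, 5, 5, 5, 105]), np.arange(1, 6), 0.04)
p_hedge, d_hedge = price_and_duration(np.array([4] * 9 + [104]), np.arange(1, 11), 0.04)

# Units of the hedge needed to offset the portfolio's first-order rate exposure
hedge_ratio = -(d_port * p_port) / (d_hedge * p_hedge)
print(f"conventional duration hedge ratio: {hedge_ratio:.3f}")
```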

Relevance: 30.00%

Abstract:

We lay out a small open economy version of the Calvo sticky price model, and show how the equilibrium dynamics can be reduced to a simple representation in domestic inflation and the output gap. We use the resulting framework to analyze the macroeconomic implications of three alternative rule-based policy regimes for the small open economy: domestic inflation and CPI-based Taylor rules, and an exchange rate peg. We show that a key difference among these regimes lies in the relative amount of exchange rate volatility that they entail. We also discuss a special case for which domestic inflation targeting constitutes the optimal policy, and for which a simple second-order approximation to the utility of the representative consumer can be derived and used to evaluate the welfare losses associated with the suboptimal rules.
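
Schematically, the two-equation representation referred to above takes the familiar New Keynesian form of a domestic-inflation Phillips curve plus a dynamic IS relation, closed by a policy rule; the composite coefficients (written generically below as $\kappa_\alpha$ and $\sigma_\alpha$) depend on the degree of openness, and the exact mapping to structural parameters is given in the paper, not reproduced here.

```latex
% Schematic reduced form (notation assumed, not copied from the paper)
\begin{align}
  \pi_{H,t} &= \beta\,\mathbb{E}_t\{\pi_{H,t+1}\} + \kappa_\alpha\, x_t
      && \text{(domestic-inflation Phillips curve)} \\
  x_t &= \mathbb{E}_t\{x_{t+1}\}
        - \tfrac{1}{\sigma_\alpha}\left(i_t - \mathbb{E}_t\{\pi_{H,t+1}\} - \bar r^{\,n}_t\right)
      && \text{(dynamic IS relation)} \\
  i_t &= \rho + \phi_\pi\, \pi_{H,t}
      && \text{(e.g.\ a domestic-inflation Taylor rule)}
\end{align}
```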

Relevance: 30.00%

Abstract:

The two essential features of a decentralized economy taken into account are, first, that individual agents need some information about other agents in order to meet potential trading partners, which requires some communication or interaction between these agents, and second, that in general agents will face trading uncertainty. We consider trade in a homogeneous commodity. Firms decide upon their effective supplies, and may create their own markets by sending information signals communicating their willingness to sell. Meeting of potential trading partners is arranged in the form of shopping by consumers. The questions to be considered are: How do firms compete in such markets? And what are the properties of an equilibrium? We establish existence conditions for a symmetric Nash equilibrium in the firms' strategies, and analyze its characteristics. The developed framework appears to lend itself well to studying many typical phenomena of decentralized economies, such as the emergence of central markets, the role of middlemen, and price-making.

Relevance: 30.00%

Abstract:

Most facility location decision models ignore the fact that for a facility to survive it needs a minimum demand level to cover costs. In this paper we present a decision model for a firm that wishes to enter a spatial market where there are several competitors already located. This market is such that for each outlet there is a demand threshold level that has to be achieved in order to survive. The firm wishes to know where to locate its outlets so as to maximize its market share, taking into account the threshold level. It may happen that, due to this new entrance, some competitors will not be able to meet the threshold and therefore will disappear. A formulation is presented, together with a heuristic solution method and computational experience.
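
A toy greedy heuristic in the spirit of the problem (not the paper's formulation or solution method) is sketched below: candidate outlets are opened one at a time to maximize captured demand, and a candidate is only opened if the demand it would capture, given the competitors already in the market, meets the survival threshold. Locations, demands and the threshold are all assumed values.

```python
# A toy greedy sketch of threshold-constrained competitive location; it is not
# the paper's model or heuristic. All coordinates, demands and the survival
# threshold are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
demand_pts = rng.uniform(0, 10, size=(30, 2))   # customer locations
weights = rng.uniform(1, 5, size=30)            # demand at each customer point
candidates = rng.uniform(0, 10, size=(8, 2))    # potential new outlet sites
competitors = rng.uniform(0, 10, size=(3, 2))   # rivals already located
THRESHOLD, P = 15.0, 2                          # survival demand level, outlets to open

def captured(site, opened):
    """Demand captured by `site`: customers for whom it is the closest facility."""
    facilities = np.vstack([competitors] + [candidates[j] for j in opened] + [site])
    d = np.linalg.norm(demand_pts[:, None, :] - facilities[None, :, :], axis=2)
    return weights[d.argmin(axis=1) == len(facilities) - 1].sum()

opened = []
for _ in range(P):
    gains = {j: captured(candidates[j], opened)
             for j in range(len(candidates)) if j not in opened}
    best, gain = max(gains.items(), key=lambda kv: kv[1])
    if gain >= THRESHOLD:          # open only if the new outlet itself survives
        opened.append(best)

print("opened candidate sites:", opened)
```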

Relevance: 30.00%

Abstract:

In many research areas (such as public health, environmental contamination, and others) one deals with the need to use data to infer whether some proportion of a population of interest is (or one wants it to be) below and/or above some threshold, through the computation of a tolerance interval. The idea is that, once a threshold is given, one computes the tolerance interval or limit (which might be one- or two-sided) and then checks whether it satisfies the given threshold. Since in this work we deal with the computation of one-sided tolerance intervals, for the two-sided case we recommend, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] performed the computation of upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so based on similar ideas but for a two-way nested mixed or random effects model. In the random effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we recommend, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For the case of unbalanced data, as mentioned above, Fonseca et al. [3] have already computed upper tolerance intervals. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case: we first carry out the construction for the upper limit and then for the lower limit.
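
The basic ingredient of these constructions is the one-sided normal tolerance limit. The sketch below computes the exact (p, 1-α) upper tolerance limit for a plain i.i.d. normal sample via the noncentral-t tolerance factor; the nested mixed-effects generalizations developed in the paper are not reproduced here.

```python
# A minimal sketch of a one-sided (p, 1-alpha) upper tolerance limit for the
# simple i.i.d. normal case, using the exact noncentral-t tolerance factor.
import numpy as np
from scipy.stats import norm, nct

def upper_tolerance_limit(x, p=0.90, conf=0.95):
    """Limit that exceeds at least a proportion p of the population with confidence conf."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    delta = norm.ppf(p) * np.sqrt(n)                 # noncentrality parameter
    k = nct.ppf(conf, df=n - 1, nc=delta) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

sample = np.random.default_rng(2).normal(10.0, 2.0, size=25)  # synthetic data
print(upper_tolerance_limit(sample))
```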

Relevance: 30.00%

Abstract:

The lpr gene has recently been shown to encode a functional mutation in the Fas receptor, a molecule involved in transducing apoptotic signals. Mice homozygous for the lpr gene develop an autoimmune syndrome accompanied by massive accumulation of double-negative (DN) CD4-8-B220+ T cell receptor-alpha/beta+ cells. In order to investigate the origin of these DN T cells, we derived lpr/lpr mice lacking major histocompatibility complex (MHC) class I molecules by intercrossing them with beta 2-microglobulin (beta 2m)-deficient mice. Interestingly, these lpr beta 2m-/- mice develop 13-fold fewer DN T cells in lymph nodes as compared to lpr/lpr wild-type (lprWT) mice. Analysis of anti-DNA antibodies and rheumatoid factor in serum demonstrates that lpr beta 2m-/- mice produce levels of autoantibodies comparable to those of lprWT mice. Collectively, our data indicate that MHC class I molecules control the development of DN T cells but not autoantibody production in lpr/lpr mice, and support the hypothesis that the majority of DN T cells may be derived from cells of the CD8 lineage.

Relevance: 30.00%

Abstract:

Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed.

Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise but FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
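
The polynomial method described above amounts to fitting pixel variance as a second-order polynomial of DAK and reading the constant, linear and quadratic terms as electronic, quantum and fixed pattern noise. The sketch below illustrates this on synthetic data with a simple (assumed) weighting scheme; it is not the explicit decomposition method or the Guidelines' exact procedure.

```python
# A minimal sketch of second-order polynomial noise decomposition:
# variance(DAK) = electronic + quantum*DAK + fixed_pattern*DAK^2.
# The variance data below are synthetic, not measurements.
import numpy as np

dak = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])   # µGy
variance = 4.0 + 0.8 * dak + 2e-4 * dak**2                       # assumed ground truth
variance *= 1 + 0.02 * np.random.default_rng(3).standard_normal(len(dak))

# Weighted least-squares fit; the 1/variance weighting is an assumed choice
# that keeps the low-exposure points from being swamped by the large
# high-exposure variances.
c2, c1, c0 = np.polyfit(dak, variance, deg=2, w=1.0 / variance)

electronic = c0
quantum = c1 * dak
fixed_pattern = c2 * dak**2
total = electronic + quantum + fixed_pattern
for k, q, fp in zip(dak, quantum / total, fixed_pattern / total):
    print(f"DAK {k:7.2f} µGy: quantum {q:5.1%}, fixed pattern {fp:5.1%}")
```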

Relevance: 30.00%

Abstract:

In adult mammals, neural progenitors located in the dentate gyrus retain their ability to generate neurons and glia throughout life. In rodents, increased production of new granule neurons is associated with improved memory capacities, while decreased hippocampal neurogenesis results in impaired memory performance in several memory tasks. In mouse models of Alzheimer's disease, neurogenesis is impaired and the granule neurons that are generated fail to integrate existing networks. Thus, enhancing neurogenesis should improve functional plasticity in the hippocampus and restore cognitive deficits in these mice. Here, we performed a screen of transcription factors that could potentially enhance adult hippocampal neurogenesis. We identified Neurod1 as a robust neuronal determinant with the capability to direct hippocampal progenitors towards an exclusive granule neuron fate. Importantly, Neurod1 also accelerated neuronal maturation and functional integration of new neurons during the period of their maturation when they contribute to memory processes. When tested in an APPxPS1 mouse model of Alzheimer's disease, directed expression of Neurod1 in cycling hippocampal progenitors conspicuously reduced dendritic spine density deficits on new hippocampal neurons, to the same level as that observed in healthy age-matched control animals. Remarkably, this population of highly connected new neurons was sufficient to restore spatial memory in these diseased mice. Collectively, our findings demonstrate that endogenous neural stem cells of the diseased brain can be manipulated to become new neurons that could allow cognitive improvement.