944 results for Variable pricing model


Relevance:

40.00%

Publisher:

Abstract:

We analyze a finite horizon, single product, periodic review model in which pricing and production/inventory decisions are made simultaneously. Demands in different periods are random variables that are independent of each other, and their distributions depend on the product price. Pricing and ordering decisions are made at the beginning of each period and all shortages are backlogged. Ordering cost includes both a fixed cost and a variable cost proportional to the amount ordered. The objective is to find an inventory policy and a pricing strategy maximizing expected profit over the finite horizon. We show that when the demand model is additive, the profit-to-go functions are k-concave and hence an (s,S,p) policy is optimal. In such a policy, inventory in each period is managed according to the classical (s,S) policy and the price is determined based on the inventory position at the beginning of the period. For more general demand functions, i.e., multiplicative plus additive functions, we demonstrate that the profit-to-go function is not necessarily k-concave and an (s,S,p) policy is not necessarily optimal. We introduce a new concept, symmetric k-concave functions, and apply it to provide a characterization of the optimal policy.
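
As an illustration of the policy structure described above (not of the paper's analysis), a minimal sketch of one period's decisions under an (s,S,p) policy, with hypothetical numbers and an assumed pricing rule, might look like this:

```python
# Illustrative sketch of one review period under an (s, S, p) policy.
# The parameter values and the pricing rule are hypothetical; the paper
# establishes the structure of the optimal policy, not these numbers.

def ssp_decision(inventory_position, s, S, price_schedule):
    """Return (order_quantity, price) for a single review period."""
    if inventory_position <= s:                     # reorder point reached
        order_quantity = S - inventory_position     # order up to S
    else:
        order_quantity = 0
    # Price is set from the inventory position observed at the start of the period.
    price = price_schedule(inventory_position)
    return order_quantity, price


# Assumed example schedule: price decreases as the starting inventory grows.
price_schedule = lambda x: max(5.0, 12.0 - 0.05 * x)
print(ssp_decision(inventory_position=8, s=10, S=60, price_schedule=price_schedule))
# -> (52, 11.6)
```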

Relevance:

40.00%

Publisher:

Abstract:

We analyze an infinite horizon, single product, periodic review model in which pricing and production/inventory decisions are made simultaneously. Demands in different periods are identically distributed random variables that are independent of each other and their distributions depend on the product price. Pricing and ordering decisions are made at the beginning of each period and all shortages are backlogged. Ordering cost includes both a fixed cost and a variable cost proportional to the amount ordered. The objective is to maximize expected discounted, or expected average profit over the infinite planning horizon. We show that a stationary (s,S,p) policy is optimal for both the discounted and average profit models with general demand functions. In such a policy, the period inventory is managed based on the classical (s,S) policy and price is determined based on the inventory position at the beginning of each period.
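
For concreteness, the two objectives referred to above can be written as below, where $\pi_t$ is the period-$t$ profit under the chosen ordering and pricing decisions and $\alpha \in (0,1)$ is the discount factor (notation assumed here, not taken from the paper):

$$\max\; \mathbb{E}\Big[\sum_{t=1}^{\infty} \alpha^{\,t-1}\pi_t\Big] \qquad\text{and}\qquad \max\; \liminf_{T\to\infty}\frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^{T}\pi_t\Big].$$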

Relevance:

40.00%

Publisher:

Abstract:

In this paper we introduce a financial market model based on continuous-time random motions with alternating constant velocities and with jumps occurring when the velocity switches. If the jump directions are in a certain correspondence with the velocity directions of the underlying random motion with respect to the interest rate, the model is free of arbitrage. The replicating strategies for options are constructed in detail. Closed-form formulas for the option prices are obtained.
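
A minimal simulation of such a jump-telegraph log-price path, assuming exponentially distributed holding times between velocity switches and purely illustrative rates, velocities and jump sizes (none taken from the paper), could look like this:

```python
import numpy as np

# Sketch of a jump-telegraph path: the log-price moves with one of two constant
# velocities and receives a jump each time the velocity switches. All numbers
# below are illustrative assumptions.
rng = np.random.default_rng(0)
T = 1.0                        # time horizon
lam = (10.0, 12.0)             # switching intensities in each regime
c = (0.8, -0.5)                # constant velocities (drifts) in each regime
h = (-0.02, 0.03)              # jump added to the path when leaving each regime

t, x, state = 0.0, 0.0, 0
times, path = [t], [x]
while True:
    tau = rng.exponential(1.0 / lam[state])   # holding time in the current regime
    if t + tau >= T:
        x += c[state] * (T - t)               # drift until the horizon, no final jump
        times.append(T); path.append(x)
        break
    x += c[state] * tau + h[state]            # drift over tau, then jump at the switch
    t += tau
    state = 1 - state                         # alternate velocity
    times.append(t); path.append(x)

print(list(zip(np.round(times[:5], 3), np.round(path[:5], 4))))
```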

Relevance:

40.00%

Publisher:

Abstract:

The recent roll-out of smart metering technologies in several developed countries has intensified research on the impacts of Time-of-Use (TOU) pricing on consumption. This paper analyses a TOU dataset from the Province of Trento in Northern Italy using a stochastic adjustment model. Findings highlight the non-steadiness of the relationship between consumption and TOU price. Weather and active occupancy can partly explain future consumption in relation to price.
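
A stochastic (partial) adjustment specification of the kind referred to above is commonly written as follows; this generic form, in which $\lambda$ is the adjustment speed and desired consumption $c^{*}_{t}$ depends on the TOU price $p_t$, weather $w_t$ and active occupancy $o_t$, is shown for illustration and is not the paper's exact model:

$$c_t - c_{t-1} = \lambda\,(c^{*}_{t} - c_{t-1}) + \varepsilon_t, \qquad c^{*}_{t} = \beta_0 + \beta_1 p_t + \beta_2 w_t + \beta_3 o_t.$$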

Relevance:

40.00%

Publisher:

Abstract:

A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
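
As a toy illustration of intersecting a SAR-derived flood extent with a DTM to estimate WLOs along the flood boundary, the sketch below reads DTM heights at the boundary cells of a flood mask; the arrays are synthetic and the procedure is a simplification of the methods discussed:

```python
import numpy as np

# Toy example: water level observations (WLOs) taken as the DTM height at the
# boundary cells of a SAR-derived flood mask. All data here are synthetic.
rng = np.random.default_rng(1)
dtm = rng.uniform(10.0, 12.0, size=(50, 50))      # terrain heights in metres
flood = np.zeros_like(dtm, dtype=bool)
flood[20:30, 5:45] = True                         # fake flood extent

# Boundary cells: flooded cells with at least one dry 4-neighbour.
padded = np.pad(flood, 1, constant_values=False)
has_dry_neighbour = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                     ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
boundary = flood & has_dry_neighbour

wlos = dtm[boundary]          # water level ~ terrain height at the shoreline
print(f"{boundary.sum()} boundary cells, mean WLO = {wlos.mean():.2f} m")
```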

Relevance:

40.00%

Publisher:

Abstract:

The Arctic sea ice cover is thinning and retreating, causing changes in surface roughness that in turn modify the momentum flux from the atmosphere through the ice into the ocean. New model simulations comprising variable sea ice drag coefficients for both the air and water interface demonstrate that the heterogeneity in sea ice surface roughness significantly impacts the spatial distribution and trends of ocean surface stress during the last decades. Simulations with constant sea ice drag coefficients, as used in most climate models, show an increase in annual mean ocean surface stress (0.003 N/m2 per decade, 4.6%) due to the reduction of ice thickness leading to a weakening of the ice and accelerated ice drift. In contrast, with variable drag coefficients our simulations show a declining annual mean ocean surface stress, with a trend of -0.002 N/m2 per decade (3.1%) over the period 1980-2013, because of a significant reduction in surface roughness associated with an increasingly thinner and younger sea ice cover. The effectiveness of sea ice in transferring momentum depends not only on its resistive strength against the wind forcing but also on its top and bottom surface roughness, which varies with ice type and ice conditions. This reveals the need to account for sea ice surface roughness variations in climate simulations in order to correctly represent the implications of sea ice loss under global warming.
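
The ice-ocean momentum flux referred to above is typically parameterized with a quadratic bulk formula, and in the simulations described the water-side drag coefficient is allowed to vary with ice roughness instead of being held constant; the standard form (with $\rho_w$ the water density and $\mathbf{u}_i$, $\mathbf{u}_w$ the ice and ocean surface velocities) is:

$$\boldsymbol{\tau}_w = \rho_w\, c_w\, \lvert \mathbf{u}_i - \mathbf{u}_w \rvert\, (\mathbf{u}_i - \mathbf{u}_w).$$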

Relevance:

40.00%

Publisher:

Abstract:

This dissertation studies inference using generalized method of moments (GMM) estimation based on instruments. The motivation for the study is that, under weak identification of the parameters, traditional inference can lead to misleading results. We therefore review the most common tests proposed to overcome this problem and present the frameworks of Moreira (2002), Moreira & Moreira (2013) and Kleibergen (2005). Building on this, the work reconciles the statistics these authors use for inference, rewrites the score test proposed in Kleibergen (2005) using the statistics of Moreira & Moreira (2013), and obtains the optimal score test statistic using the asymptotic theory in Newey & McFadden (1984). In addition, we show the equivalence between the GMM approach and the approach based on a system of equations and likelihood for addressing the weak identification problem.
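
For reference, the GMM estimator discussed above minimizes a quadratic form in the sample moment conditions built from the instruments; in standard notation (assumed here, not taken from the dissertation), with moment functions $g_i(\theta)$ and weighting matrix $W_n$:

$$\hat{\theta} = \arg\min_{\theta}\ \bar{g}_n(\theta)'\, W_n\, \bar{g}_n(\theta), \qquad \bar{g}_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} g_i(\theta),$$

and weak identification corresponds to the Jacobian of $\bar{g}_n$ with respect to $\theta$ being close to rank deficient, which is what makes traditional inference unreliable.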

Relevance:

40.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

40.00%

Publisher:

Abstract:

In most studies on beef cattle longevity, only the cows reaching a given number of calvings by a specific age are considered in the analyses. With the aim of evaluating all cows with a productive life in the herds, taking into consideration the different forms of management on each farm, it was proposed to measure cow longevity from age at last calving (ALC), that is, the most recent calving registered in the files. The objective was to characterize this trait in order to study the longevity of Nellore cattle, using the Kaplan-Meier estimators and the Cox model. The covariables and class effects considered in the models were age at first calving (AFC), year and season of birth of the cow, and farm. The variable studied (ALC) was classified as presenting complete information (uncensored = 1) or incomplete information (censored = 0), using as criterion the difference between the date of each cow's last calving and the date of the latest calving at each farm. If this difference was >36 months, the cow was considered to have failed; if not, the cow was censored, indicating that a future calving remained possible. The records of 11 791 animals from 22 farms within the Nellore Breed Genetic Improvement Program ('Nellore Brazil') were used. In the estimation process using the Kaplan-Meier model, the AFC covariable was classified into three age groups. In individual analyses, the log-rank test and the Wilcoxon test in the Kaplan-Meier model showed that all covariables and class effects had significant effects (P < 0.05) on ALC. In the analysis considering all covariables and class effects, using the Wald test in the Cox model, only the season of birth of the cow was not significant for ALC (P > 0.05). This analysis indicated that each month added to AFC diminished the risk of the cow's failure in the herd by 2%. Nonetheless, this does not imply that animals with a younger AFC were less profitable. Cows with greater numbers of calvings were more precocious than those with fewer calvings.
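
A minimal sketch of the censoring rule and the two survival analyses described above, using the Python lifelines package and a handful of synthetic records (the 36-month rule is as described; the column names and data are illustrative):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic herd records: ALC in months, gap to the farm's latest calving, AFC.
df = pd.DataFrame({
    "alc": [96, 120, 84, 140, 110, 75],
    "gap_months": [40, 12, 50, 8, 60, 20],   # cow's last calving vs farm's latest calving
    "afc": [30, 36, 28, 40, 33, 29],         # age at first calving, months
})

# Censoring rule from the study: gap > 36 months -> failed (1), otherwise censored (0).
df["failed"] = (df["gap_months"] > 36).astype(int)

kmf = KaplanMeierFitter()
kmf.fit(durations=df["alc"], event_observed=df["failed"])
print(kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(df[["alc", "failed", "afc"]], duration_col="alc", event_col="failed")
cph.print_summary()
```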

Relevance:

40.00%

Publisher:

Abstract:

The aim of the thesis is to formulate a suitable Item Response Theory (IRT) based model to measure HRQoL (as a latent variable) using a mixed-response questionnaire and relaxing the hypothesis of a normally distributed latent variable. The new model is a combination of two models already presented in the literature: a latent trait model for mixed responses and an IRT model for a skew normal latent variable. It is developed in a Bayesian framework; a Markov chain Monte Carlo procedure is used to generate samples from the posterior distribution of the parameters of interest. The proposed model is tested on a questionnaire composed of five discrete items and one continuous item for measuring HRQoL in children, the EQ-5D-Y questionnaire, using a large sample of children collected in schools. In comparison with a model for discrete responses only and a model for mixed responses with a normal latent variable, the new model performs better in terms of deviance information criterion (DIC), chain convergence times and precision of the estimates.
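
A highly simplified PyMC sketch of the central idea, a skew normal latent trait that drives both ordinal and continuous responses, is given below; the single ordinal item, fixed cutpoints and synthetic data are illustrative simplifications, not the thesis model:

```python
import numpy as np
import pymc as pm

# Simplified sketch: one skew-normal latent HRQoL trait, one 3-category ordinal
# item (ordered logistic, fixed cutpoints) and one continuous item. Synthetic data.
rng = np.random.default_rng(42)
n = 200
y_ord = rng.integers(0, 3, size=n)        # fake ordinal responses coded 0, 1, 2
y_cont = rng.normal(0.0, 1.0, size=n)     # fake continuous responses

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 2.0)                       # skewness of the latent trait
    theta = pm.SkewNormal("theta", mu=0.0, sigma=1.0, alpha=alpha, shape=n)

    a_ord = pm.HalfNormal("a_ord", 1.0)                        # discrimination of the ordinal item
    pm.OrderedLogistic("y_ord", eta=a_ord * theta,
                       cutpoints=np.array([-1.0, 1.0]), observed=y_ord)

    a_c = pm.Normal("a_c", 0.0, 1.0)                           # loading of the continuous item
    b_c = pm.Normal("b_c", 0.0, 1.0)
    sigma_c = pm.HalfNormal("sigma_c", 1.0)
    pm.Normal("y_cont", mu=b_c + a_c * theta, sigma=sigma_c, observed=y_cont)

    idata = pm.sample(500, tune=500, chains=2, target_accept=0.9)
```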

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Loss-of-function mutations in SCN5A, the gene encoding the Na(v)1.5 Na+ channel, are associated with inherited cardiac conduction defects and Brugada syndrome, which both exhibit variable phenotypic penetrance of conduction defects. We investigated the mechanisms of this heterogeneity in a mouse model with heterozygous targeted disruption of Scn5a (Scn5a(+/-) mice) and compared our results with those obtained in patients with loss-of-function mutations in SCN5A. METHODOLOGY/PRINCIPAL FINDINGS: Based on ECG, 10-week-old Scn5a(+/-) mice were divided into two subgroups, one displaying severe ventricular conduction defects (QRS interval >18 ms) and one a mild phenotype (QRS < or = 18 ms; QRS in wild-type littermates: 10-18 ms). This phenotypic difference persisted with aging. At 10 weeks, the Na+ channel blocker ajmaline prolonged the QRS interval similarly in both groups of Scn5a(+/-) mice. In contrast, in old mice (>53 weeks), the effect of ajmaline was larger in the severely affected subgroup. These data matched the clinical observations on patients with SCN5A loss-of-function mutations with either severe or mild conduction defects. Ventricular tachycardia developed in 5/10 old severely affected Scn5a(+/-) mice but not in mildly affected ones. Correspondingly, symptomatic SCN5A-mutated Brugada patients had more severe conduction defects than asymptomatic patients. Old severely affected Scn5a(+/-) mice, but not mildly affected ones, showed extensive cardiac fibrosis. Mildly affected Scn5a(+/-) mice had similar Na(v)1.5 mRNA levels but higher Na(v)1.5 protein expression, and a moderately larger I(Na) current, than severely affected Scn5a(+/-) mice. As a consequence, action potential upstroke velocity was more decreased in severely affected Scn5a(+/-) mice than in mildly affected ones. CONCLUSIONS: Scn5a(+/-) mice show similar phenotypic heterogeneity to SCN5A-mutated patients. In Scn5a(+/-) mice, phenotype severity correlates with wild-type Na(v)1.5 protein expression.

Relevance:

40.00%

Publisher:

Abstract:

The purpose of this study is to develop statistical methodology to facilitate indirect estimation of the concentration of antiretroviral drugs and viral loads in the prostate gland and the seminal vesicle. Differences in antiretroviral drug concentrations between these organs may lead to suboptimal concentrations in one gland compared with the other. Suboptimal levels of the antiretroviral drugs will not fully suppress the virus in that gland, leaving a source of sexually transmissible virus and increasing the chance of selecting for drug-resistant virus. This information may be useful in selecting an antiretroviral drug regimen that achieves optimal concentrations in most glands of the male genital tract. Using fractionally collected semen ejaculates, Lundquist (1949) measured, in each fraction, levels of surrogate markers that are uniquely produced by specific male accessory glands. To determine the original glandular concentrations of the surrogate markers, Lundquist solved a simultaneous series of linear equations. This method has several limitations. In particular, it does not yield a unique solution, it does not address measurement error, and it disregards inter-subject variability in the parameters. To cope with these limitations, we developed a mechanistic latent variable model based on the physiology of the male genital tract and the surrogate markers. We employ a Bayesian approach and perform a sensitivity analysis with regard to the distributional assumptions on the random effects and priors. The model and Bayesian approach are validated on experimental data where the concentration of a drug should be (biologically) differentially distributed between the two glands. In this example, the Bayesian model-based conclusions are found to be robust to model specification, and this hierarchical approach leads to more scientifically valid conclusions than the original methodology. In particular, unlike existing methods, the proposed model-based approach was not affected by a common form of outliers.
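
The Lundquist-style calculation referred to above amounts to solving a small linear system in which each semen fraction is a mixture of glandular fluids; a toy version with hypothetical mixing proportions and measurements (two glands, two fractions, one marker) is sketched below:

```python
import numpy as np

# Toy Lundquist-style recovery of glandular concentrations from fraction data.
# Mixing proportions and measured values are hypothetical.

# Rows: ejaculate fractions; columns: share of fluid contributed by each gland.
mix = np.array([[0.7, 0.3],      # fraction 1 is mostly prostatic fluid
                [0.2, 0.8]])     # fraction 2 is mostly seminal-vesicle fluid

measured = np.array([41.0, 18.0])           # marker level measured in each fraction

glandular = np.linalg.solve(mix, measured)  # concentrations in the original glands
print(dict(zip(["prostate", "seminal_vesicle"], glandular.round(2))))
```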

Relevance:

40.00%

Publisher:

Abstract:

Engine manufacturers need computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for assessing combustion system hardware designs and for early development of engine calibrations. This thesis discusses the process of developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injection spark-ignition engine with variable valve timing, lift and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was used to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software CONVERGE was used to simulate and correlate the 3-D combustion system, port and piston geometry with the turbulent flow development within the cylinder, in order to properly predict the measured turbulent flow parameters through the intake, compression and expansion processes. The engine simulation software GT-Power was then used to determine the 1-D flow characteristics of the engine hardware under test, and the regressed combustion modeling tool was correlated against experimental data to assess its accuracy. The results show that the tool accurately captures the trends in combustion sensitivity to turbulent flow, thermodynamic and internal residual effects as intake and exhaust valve timing, lift and duration are changed.
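
As one concrete example of the kind of non-linear regression involved, the sketch below fits a Wiebe-function burn profile to mass-fraction-burned points with scipy; the Wiebe form and the sample data are illustrative and are not the regression model developed in the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Wiebe function to mass-fraction-burned (MFB) data versus crank angle:
#   x_b = 1 - exp(-a * ((theta - theta0) / dtheta) ** (m + 1))
# Functional form and data points are illustrative only.

def wiebe(theta, theta0, dtheta, a, m):
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

crank = np.array([-10, -5, 0, 5, 10, 15, 20, 25, 30.0])                 # deg ATDC
mfb = np.array([0.02, 0.08, 0.22, 0.45, 0.68, 0.84, 0.93, 0.97, 0.99])

popt, _ = curve_fit(wiebe, crank, mfb, p0=[-12.0, 45.0, 5.0, 2.0],
                    bounds=([-30.0, 10.0, 0.1, 0.1], [10.0, 120.0, 10.0, 5.0]))
theta0, dtheta, a, m = popt
print(f"theta0={theta0:.1f} deg, duration={dtheta:.1f} deg, a={a:.2f}, m={m:.2f}")
```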