58 results for Dynamic Load Model
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an important phenomenon in a wide range of industries. Yet there are few models or methodologies available to exploit this phenomenon within yield management systems. We make some progress on filling this void. Specifically, we develop a model of yield management in which the buyers' behavior is modeled explicitly using a multinomial logit model of demand. The control problem is to decide which subset of fare classes to offer at each point in time. The set of open fare classes then affects the purchase probabilities for each class. We formulate a dynamic program to determine the optimal control policy and show that it reduces to a dynamic nested allocation policy. Thus, the optimal choice-based policy can easily be implemented in reservation systems that use nested allocation controls. We also develop an estimation procedure for our model based on the expectation-maximization (EM) method that jointly estimates arrival rates and choice model parameters when no-purchase outcomes are unobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performance relative to traditional leg-based models that do not account for choice behavior.
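The choice mechanism this abstract describes can be illustrated with a minimal sketch (not the paper's implementation): a multinomial logit model in which hypothetical fare-class utilities determine purchase probabilities over whichever subset of classes is currently open, with a no-purchase alternative always available.

```python
import math

def mnl_choice_probs(utilities, offered, u_nopurchase=0.0):
    """Multinomial-logit purchase probabilities over an offered set.

    utilities: dict fare_class -> deterministic utility (hypothetical values)
    offered: subset of fare classes currently open
    The no-purchase option is always available to the customer.
    """
    weights = {c: math.exp(utilities[c]) for c in offered}
    denom = math.exp(u_nopurchase) + sum(weights.values())
    probs = {c: w / denom for c, w in weights.items()}
    probs["no_purchase"] = math.exp(u_nopurchase) / denom
    return probs

# Opening an additional class shifts probability away from the others
# (the buy-up / buy-down effect the abstract refers to):
p_two = mnl_choice_probs({"Y": 1.0, "M": 1.5, "K": 2.0}, offered={"Y", "M"})
p_three = mnl_choice_probs({"Y": 1.0, "M": 1.5, "K": 2.0}, offered={"Y", "M", "K"})
```

Here the class names and utility values are made up purely for illustration; in the paper these parameters are what the EM procedure estimates.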
Abstract:
We estimate an open economy dynamic stochastic general equilibrium (DSGE) model of Australia with a number of shocks, frictions and rigidities, matching a large number of observable time series. We find that both foreign and domestic shocks are important drivers of the Australian business cycle. We also find that the initial impact on inflation of an increase in demand for Australian commodities is negative, due to an improvement in the real exchange rate, though there is a persistent positive effect on inflation that dominates at longer horizons.
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of population size and the model parameters.
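The estimation difficulty the abstract mentions (a binomial distribution with both population size and success probability unknown) can be made concrete with a toy computation. The booking counts and parameter pairs below are hypothetical; the point is only that pairs with the same mean produce nearly identical likelihoods.

```python
import math

def binom_loglik(counts, n, p):
    """Log-likelihood of observed purchase counts under Binomial(n, p),
    computed with log-gamma to avoid factorial overflow."""
    ll = 0.0
    for x in counts:
        ll += (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
               + x * math.log(p) + (n - x) * math.log(1.0 - p))
    return ll

# Hypothetical daily booking counts; no-purchases are never observed.
counts = [8, 10, 9, 11, 10]

# (n, p) pairs sharing the mean n*p = 10 are nearly indistinguishable,
# which is the near-flat likelihood ridge the text alludes to:
ll_small = binom_loglik(counts, n=100, p=0.10)
ll_large = binom_loglik(counts, n=200, p=0.05)
```

This is why the heuristic in the paper leans on the functional form of the purchase probabilities, offer-set variety, and qualitative arrival-rate knowledge rather than the raw likelihood alone.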
Abstract:
Climate science indicates that climate stabilization requires low GHG emissions. Is this consistent with nondecreasing human welfare? Our welfare or utility index emphasizes education, knowledge, and the environment. We construct and calibrate a multigenerational model with intertemporal links provided by education, physical capital, knowledge and the environment. We reject discounted utilitarianism and adopt, first, the Pure Sustainability Optimization (or Intergenerational Maximin) criterion, and, second, the Sustainable Growth Optimization criterion, which maximizes the utility of the first generation subject to a given future rate of growth. We apply these criteria to our calibrated model via a novel algorithm inspired by the turnpike property. The computed paths yield levels of utility higher than the level at reference year 2000 for all generations. They require doubling the fraction of labor resources devoted to the creation of knowledge relative to the reference level, whereas the fractions of labor allocated to consumption and leisure are similar to the reference ones. On the other hand, higher growth rates require substantial increases in the fraction of labor devoted to education, together with moderate increases in the fractions of labor devoted to knowledge and to investment in physical capital.
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers, for the first time, multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than those reported in the surveyed publications. To solve the GMM-model, this paper proposes a multiobjective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA). Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
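As a small illustration of the Pareto-optimal-set idea underlying this work (a generic sketch, not the GMM/SPEA implementation itself), the following extracts the nondominated subset from a list of objective vectors, assuming every objective is minimized:

```python
def dominates(a, b):
    """True if a dominates b under minimization: a is no worse in every
    objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_front(points):
    """Return the nondominated (Pareto-optimal) subset of points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy two-objective vectors (e.g., cost vs. maximum link utilization):
front = pareto_front([(1, 2), (2, 1), (2, 2), (3, 3)])
```

In an SPEA-style MOEA, a dominance test like this drives both the external archive of nondominated solutions and the fitness assignment; with 11 objectives the same code applies unchanged to longer tuples.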
Abstract:
Caveolins are a crucial component of caveolae but have also been localized to the Golgi complex, and, under some experimental conditions, to lipid bodies (LBs). The physiological relevance and dynamics of LB association remain unclear. We now show that endogenous caveolin-1 and caveolin-2 redistribute to LBs in lipid loaded A431 and FRT cells. Association with LBs is regulated and reversible; removal of fatty acids causes caveolin to rapidly leave the lipid body. We also show by subcellular fractionation, light and electron microscopy that during the first hours of liver regeneration, caveolins show a dramatic redistribution from the cell surface to the newly formed LBs. At later stages of the regeneration process (when LBs are still abundant), the levels of caveolins in LBs decrease dramatically. As a model system to study association of caveolins with LBs we have used brefeldin A (BFA). BFA causes rapid redistribution of endogenous caveolins to LBs and this association was reversed upon BFA washout. Finally, we have used a dominant negative LB-associated caveolin mutant (cavDGV) to study LB formation and to examine its effect on LB function. We now show that the cavDGV mutant inhibits microtubule-dependent LB motility and blocks the reversal of lipid accumulation in LBs.
Abstract:
The aim of this paper is to analyze whether Spanish municipalities adjust in response to a budget shock and, if so, which budget items carry out the adjustment. To answer these questions we use a vector error-correction model (VECM), estimated with a panel of data on Spanish municipalities over the period 1988-2006. Our results confirm, first, that municipalities do adjust in the presence of a fiscal shock (that is, the deficit is stationary in the long run). Second, we find that when the shock affects revenues, the adjustment is borne mainly by the municipality through expenditure cuts, with transfers playing a very limited role in the adjustment process. By contrast, when the shock affects expenditure, the adjustment is shared in similar proportions between the municipality, which raises taxes, and higher levels of government, which increase transfers. These results suggest that the sustainability of local public finances is feasible under different institutional settings.
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports. Each individual transfer function element is obtained from the analysis of the temperature transient at one node after a power step at another node. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
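A minimal sketch of the thermal-impedance-matrix idea with made-up Foster-network parameters (in the paper these come from NLSQ fits to FEM transients): the temperature rise at node i under simultaneous power steps is the superposition of Zth_ij(t) * P_j over all sources j.

```python
import math

def zth_step(t, stages):
    """Step response of a Foster thermal network: sum of R_k * (1 - exp(-t/tau_k)).
    `stages` is a list of (R_k, tau_k) pairs (hypothetical values here)."""
    return sum(r * (1.0 - math.exp(-t / tau)) for r, tau in stages)

def temp_rise(t, zth_matrix, powers):
    """Temperature rise at every node from simultaneous power steps,
    by superposition over the thermal impedance matrix."""
    return [sum(zth_step(t, zth_matrix[i][j]) * powers[j]
                for j in range(len(powers)))
            for i in range(len(zth_matrix))]

# Hypothetical symmetric 2x2 matrix: self-heating terms dominate coupling.
zth = [[[(10.0, 1e-3)], [(2.0, 5e-3)]],
       [[(2.0, 5e-3)], [(8.0, 2e-3)]]]
rise = temp_rise(1.0, zth, powers=[0.5, 0.3])  # t = 1 s, near steady state
```

At t = 1 s all exponentials have decayed, so each node settles at the sum of its R_ij * P_j products; real models need several (R, tau) stages per element, which is exactly the multiexponential order-selection problem the abstract discusses.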
Abstract:
Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method based on Dynamic Principal Component Analysis that models the relationships between the sensors in the array. Under normal conditions a certain variance distribution characterizes the sensor signals; in the presence of a new source of variance, however, the PCA decomposition changes drastically. To prevent the influence of sensor drifts, the model is adaptive and is calculated recursively with minimum computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakages in an air compressor. Results clearly demonstrate the efficiency of the proposed method.
Abstract:
This empirical work applies a duration model to the study of factors determining the privatization of local water services. I assess how the factors determining the privatization decision evolve as time goes by. A sample of 133 Spanish municipalities during the six terms of office that took place in the 1980-2002 period is analyzed. A dynamic neighboring effect is hypothesized and successfully tested: in a first stage, private water supply firms may try to expand to regions where no service has been privatized, in order to spread over the region after becoming established, thanks to scale advantages. Other factors influencing the privatization decision evolve over the two decades under study, from the priority of fixing old infrastructures to the concern about service efficiency. Some complementary results regarding political and budgetary factors are also obtained.
Abstract:
A phase-field model for dealing with dynamic instabilities in membranes is presented. We use it to study the curvature-driven pearling instability in vesicles induced by the anchorage of amphiphilic polymers on the membrane. Within this model, we obtain the morphological changes reported in recent experiments. The formation of a homogeneous pearled structure is achieved by successive pearling of an initial cylindrical tube, starting from the tip. For a high enough concentration of anchors, we show theoretically that the homogeneous pearled shape is energetically less favorable than an inhomogeneous one, with a large sphere connected to an array of smaller spheres.
Abstract:
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies.
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized by an elevated level of clustering compared to a random graph (although not extreme) and can be markedly non-local.
Abstract:
We consider a Potts model diluted by fully frustrated Ising spins. The model corresponds to a fully frustrated Potts model with variables having an integer absolute value and a sign. This model presents precursor phenomena of a glass transition in the high-temperature region. We show that the onset of these phenomena can be related to a thermodynamic transition. Furthermore, this transition can be mapped onto a percolation transition. We numerically study the phase diagram in two dimensions (2D) for this model with frustration and without disorder and we compare it to the phase diagram of (i) the model with frustration and disorder and (ii) the ferromagnetic model. Introducing a parameter that connects the three models, we generalize the exact expression of the ferromagnetic Potts transition temperature in 2D to the other cases. Finally, we estimate the dynamic critical exponents related to the Potts order parameter and to the energy.
Abstract:
The self-intermediate dynamic structure factor Fs(k,t) of liquid lithium near the melting temperature is calculated by molecular dynamics. The results are compared with the predictions of several theoretical approaches, paying special attention to the Lovesey model and the Wahnström and Sjögren mode-coupling theory. To this end, the results for the Fs(k,t) second memory function predicted by both models are compared with those calculated from the simulations.