950 results for finite mixture models


Relevance:

30.00%

Abstract:

Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds a further layer of indeterminacy. To counter this, we first propose an alternative finite-population model that avoids fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating the market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations showing that the estimator is very promising in obtaining unbiased estimates of the population size and the model parameters.
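
To make the estimation problem concrete, the following is a minimal illustrative sketch, not the paper's heuristic: a multinomial-logit purchase probability with the no-purchase utility normalised to zero, and the negative log-likelihood of the "binomial with unknown N and p" problem the abstract refers to, with period bookings treated as Binomial(N, p) draws. All names and the per-period binomial treatment are assumptions made only for illustration.

    import numpy as np
    from scipy.special import gammaln

    def mnl_purchase_prob(v, offered):
        # Total purchase probability under an MNL model; the no-purchase
        # alternative has utility 0, so it contributes the 1 in the denominator.
        w = np.where(offered, np.exp(v), 0.0).sum()
        return w / (1.0 + w)

    def binom_neg_log_lik(N, p, bookings):
        # bookings[t]: bookings observed in period t, modelled as Binomial(N, p)
        # with both the market size N and the purchase probability p unknown.
        k = np.asarray(bookings, dtype=float)
        return -np.sum(gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
                       + k * np.log(p) + (N - k) * np.log(1.0 - p))

Minimising such a likelihood over a real-valued N (at least as large as the largest booking count) and over the MNL parameters that determine p for each offer set is the joint estimation task described above; the variety of offer sets and qualitative knowledge of arrival rates are what the paper exploits to make it well behaved.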

Relevance:

30.00%

Abstract:

Here I develop a model of a radiative-convective atmosphere in which both the radiative and convective schemes are highly simplified. Atmospheric absorption of radiation at selective wavelengths is treated with constant mass absorption coefficients in finite-width spectral bands. The convective regime is introduced through a prescribed lapse rate in the troposphere. The main novelty of the radiative-convective model developed here is that it is solved without any angular approximation for the radiation field. The solution obtained in the purely radiative mode (i.e. with convection ignored) leads to multiple stable equilibria, very similar to results recently found in simple models of planetary atmospheres. However, the introduction of convective processes removes these multiple equilibria. This shows the importance of taking convective processes into account even in qualitative analyses of planetary atmospheres.
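
The convective part of such a model amounts to prescribing the tropospheric temperature profile; the sketch below illustrates this with made-up numbers (a constant lapse rate capped by an isothermal stratosphere) and is not the radiative solver described above.

    import numpy as np

    def temperature_profile(z, T_surface=288.0, lapse_rate=6.5e-3, T_strat=210.0):
        """z in metres, lapse_rate in K/m; illustrative values only."""
        T_trop = T_surface - lapse_rate * z      # prescribed lapse rate in the troposphere
        return np.maximum(T_trop, T_strat)       # above the tropopause, a radiative value

    print(temperature_profile(np.linspace(0.0, 30e3, 7)))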

Relevance:

30.00%

Abstract:

Functionally relevant large-scale brain dynamics operates within the framework imposed by anatomical connectivity and by time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored: the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) of a human subject. We use a combination of graph-theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database is more complex and biologically more realistic than that based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
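
As a small illustration of the kind of graph-theoretic comparison involved (a sketch under an assumed weight convention, not the authors' analysis pipeline): for a weighted connection matrix W in which W[i, j] is the strength of the projection from region j to region i, a tractography-derived (DSI) matrix is symmetric, whereas a tract-tracing matrix such as CoCoMac carries directed weights, and this difference shows up directly in a simple asymmetry measure.

    import numpy as np

    def connectivity_summary(W):
        in_strength = W.sum(axis=1)       # total afferent weight per region
        out_strength = W.sum(axis=0)      # total efferent weight per region
        # connection density, assuming a zero diagonal (no self-connections)
        density = np.count_nonzero(W) / (W.size - W.shape[0])
        # ~0 for a symmetric (undirected) matrix, > 0 when weights are directed
        asymmetry = np.abs(W - W.T).sum() / (np.abs(W) + np.abs(W.T)).sum()
        return {"in_strength": in_strength, "out_strength": out_strength,
                "density": density, "asymmetry": asymmetry}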

Relevance:

30.00%

Abstract:

Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, it is of the utmost importance to develop compact dynamic thermal models that can be used for electrothermal simulation. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between two arbitrary ports; each individual transfer-function element Z_ij(s) is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least-squares (NLSQ) methods if the order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
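
The constrained NLSQ option can be sketched roughly as follows, assuming a step response of the form Zth(t) = sum_k R_k (1 - exp(-t/tau_k)); the function names and initial guesses are made up for illustration, and the paper's order selection with validation signals is not shown.

    import numpy as np
    from scipy.optimize import least_squares

    def multi_exp_step(t, R, tau):
        # sum_k R_k * (1 - exp(-t / tau_k)), evaluated for all times at once
        return np.sum(R[:, None] * (1.0 - np.exp(-t[None, :] / tau[:, None])), axis=0)

    def fit_zth(t, zth, order):
        # crude initial guesses: equal amplitudes, log-spaced time constants (requires t[1] > 0)
        x0 = np.concatenate([np.full(order, zth.max() / order),
                             np.logspace(np.log10(t[1]), np.log10(t[-1]), order)])
        def residuals(x):
            R, tau = x[:order], x[order:]
            return multi_exp_step(t, R, tau) - zth
        # the constraint: amplitudes and time constants are kept non-negative
        sol = least_squares(residuals, x0, bounds=(0.0, np.inf))
        return sol.x[:order], sol.x[order:]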

Relevance:

30.00%

Abstract:

We have systematically analyzed six different lattice models with quenched disorder and no thermal fluctuations that exhibit a field-driven first-order phase transition. We have studied the nonequilibrium transition that appears when the amount of disorder is varied, characterized by the change from a discontinuous hysteresis cycle (with one or more large avalanches) to a smooth one (with only tiny avalanches). We have computed critical exponents using finite-size scaling techniques and shown that they are consistent with universal values depending only on the space dimensionality d.

Relevance:

30.00%

Abstract:

It is shown that the world-volume field theory of a single D3-brane in a supergravity D3-brane background admits finite-energy, non-singular Abelian monopoles and dyons preserving 1/2 or 1/4 of the N=4 supersymmetry and saturating a Bogomolnyi-type bound. The 1/4-supersymmetric solitons provide a world-volume realization of string-junction dyons. We also discuss the dual M-theory realization of the 1/2-supersymmetric dyons as finite-tension self-dual strings on the M5-brane, and of the 1/4-supersymmetric dyons as their intersections.

Relevance:

30.00%

Abstract:

The stable coexistence of two haploid genotypes or two species is studied in a spatially heterogeneous environment subject to a mixture of soft selection (within-patch regulation) and hard selection (outside-patch regulation) and in which two kinds of resource are available. This is analysed both at an ecological time-scale (short term) and at an evolutionary time-scale (long term). At the ecological scale, we show that coexistence is very unlikely if the two competitors are symmetrical specialists exploiting different resources. In this case, the most favourable conditions are met when the two resources are equally available, a situation that should favour generalists at an evolutionary scale. Alternatively, low within-patch density dependence (soft selection) enhances coexistence between two slightly different specialists of the most available resource. This results from the opposing forces acting in the hard and soft regulation modes. In the case of unbalanced accessibility to the two resources, hard selection favours the most specialized genotype, whereas soft selection strongly favours the less specialized one. Our results suggest that competition for different resources may be difficult to demonstrate in the wild even when it is a key factor in the maintenance of adaptive diversity. At the evolutionary scale, a monomorphic invasive evolutionarily stable strategy (ESS) always exists. When there is a linear trade-off between survival in one habitat and survival in another, this ESS lies between an absolute adjustment of survival to niche size (for mainly soft-regulated populations) and absolute survival (specialization) in a single niche (for mainly hard-regulated populations). This suggests that environments matching the assumptions of such models should lead to an absence of adaptive variation in the long term.

Relevance:

30.00%

Abstract:

We study numerically the out-of-equilibrium dynamics of the hypercubic cell spin glass in high dimensionalities. We obtain evidence of aging effects qualitatively similar both to experiments and to simulations of low-dimensional models. This suggests that the Sherrington-Kirkpatrick model as well as other mean-field finite connectivity lattices can be used to study these effects analytically.

Relevance:

30.00%

Abstract:

The purpose of this study was to investigate the effect of cement paste quality on concrete performance, particularly fresh properties, by changing the water-to-cementitious materials ratio (w/cm), the type and dosage of supplementary cementitious materials (SCM), and the air-void system in binary and ternary mixtures. In this experimental program, a total matrix of 54 mixtures was prepared with w/cm of 0.40 and 0.45; target air contents of 2%, 4%, and 8%; a fixed cementitious content of 600 pounds per cubic yard (pcy); and three types of SCM incorporated at different dosages. The fine aggregate-to-total aggregate ratio was fixed at 0.42. Workability, rheology, air-void system, setting time, strength, Wenner probe surface resistivity, and shrinkage were determined. The effects of the paste variables on workability are more marked at the higher w/cm. Compressive strength is strongly influenced by paste quality, dominated by w/cm and air content. Surface resistivity is improved by the inclusion of Class F fly ash and slag cement, especially at later ages. Ternary mixtures performed in accordance with their ingredients. The data collected will be used to develop models that will be part of an innovative mix proportioning procedure.

Relevance:

30.00%

Abstract:

Microsatellite loci mutate at an extremely high rate and are generally thought to evolve through a stepwise mutation model. Several differentiation statistics taking into account the particular mutation scheme of microsatellites have been proposed. The most commonly used is R(ST), which is independent of the mutation rate under a generalized stepwise mutation model. F(ST) and R(ST) are commonly reported in the literature but often differ widely. Here we compare their statistical performance using individual-based simulations of a finite island model. The simulations were run under different levels of gene flow, mutation rates, and population numbers and sizes. In addition to the per-locus statistical properties, we compare two ways of combining R(ST) over loci. Our simulations show that even under a strict stepwise mutation model, no statistic is best overall. All estimators suffer to different extents from large bias and variance. While R(ST) better reflects population differentiation in populations characterized by very low gene exchange, F(ST) gives better estimates in cases of high gene flow. The number of loci sampled (12, 24, or 96) has only a minor effect on the relative performance of the estimators under study. For all estimators there is a striking effect of the number of samples, with the differentiation estimates showing very odd distributions for two samples.
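
For orientation, moment-based versions of the two statistics for a single locus can be sketched as follows (an illustrative sketch, not the exact estimators or the ways of combining R(ST) over loci compared in the study): F(ST) contrasts heterozygosities, while R(ST) contrasts allele-size variances and is therefore tied to the stepwise mutation model.

    import numpy as np

    def fst(pop_alleles):
        # pop_alleles: list of 1-D arrays of allele sizes, one array per population
        def het(a):
            _, counts = np.unique(a, return_counts=True)
            p = counts / counts.sum()
            return 1.0 - np.sum(p ** 2)               # expected heterozygosity
        H_S = np.mean([het(a) for a in pop_alleles])  # mean within-population value
        H_T = het(np.concatenate(pop_alleles))        # value in the pooled sample
        return (H_T - H_S) / H_T

    def rst(pop_alleles):
        S_W = np.mean([np.var(a, ddof=1) for a in pop_alleles])  # mean within-population size variance
        S_bar = np.var(np.concatenate(pop_alleles), ddof=1)      # overall size variance
        return (S_bar - S_W) / S_bar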

Relevance:

30.00%

Abstract:

As modern molecular biology moves towards the analysis of biological systems as opposed to their individual components, the need for appropriate mathematical and computational techniques for understanding the dynamics and structure of such systems is becoming more pressing. For example, the modeling of biochemical systems using ordinary differential equations (ODEs) based on high-throughput, time-dense profiles is becoming more commonplace, necessitating the development of improved techniques to estimate model parameters from such data. Due to the high dimensionality of this estimation problem, straightforward optimization strategies rarely produce correct parameter values, and hence current methods tend to use genetic/evolutionary algorithms to perform nonlinear parameter fitting. Here, we describe a completely deterministic approach based on interval analysis. This allows us to examine entire sets of parameters, and thus to exhaust the global search within a finite number of steps. In particular, we show how our method may be applied to a generic class of ODEs used for modeling biochemical systems called Generalized Mass Action models (GMAs). In addition, we show that for GMAs our method is amenable to the interval-arithmetic technique of constraint propagation, which greatly improves its efficiency. To illustrate the applicability of our method we apply it to some networks of biochemical reactions from the literature, showing in particular that, in addition to estimating system parameters in the absence of noise, our method may also be used to recover the topology of these networks.
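
The interval idea can be illustrated on a single GMA term (a hypothetical helper, not the authors' solver): with the concentrations X_j > 0 fixed by the data and the rate constant gamma assumed non-negative, a term gamma * prod_j X_j^f_j is monotone in gamma and in each exponent f_j, so its range over a box of parameter intervals is available in closed form, and boxes whose range is inconsistent with the measured derivatives can be discarded, which is what makes a finite-step global search possible.

    import numpy as np

    def gma_term_bounds(gamma_lo, gamma_hi, f_lo, f_hi, X):
        """Range of gamma * prod_j X_j**f_j over the box [gamma_lo, gamma_hi] x [f_lo, f_hi],
        assuming X_j > 0 and gamma_lo >= 0."""
        # X_j**f_j is increasing in f_j when X_j >= 1 and decreasing when X_j < 1
        lo_exp = np.where(X >= 1.0, f_lo, f_hi)
        hi_exp = np.where(X >= 1.0, f_hi, f_lo)
        return gamma_lo * np.prod(X ** lo_exp), gamma_hi * np.prod(X ** hi_exp)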

Relevance:

30.00%

Abstract:

Electrical deep brain stimulation (DBS) is an efficient method to treat movement disorders. Many models of DBS, based mostly on finite elements, have recently been proposed to better understand the interaction between the electrical stimulation and the brain tissues. In monopolar DBS, which is widely used clinically, the implanted pulse generator (IPG) serves as the reference electrode (RE). In this paper, the influence of the RE model in monopolar DBS is investigated. For that purpose, a finite element model of the full electric loop, including the head, the neck and the superior chest, is used. The head, neck and superior chest are made of simple structures such as parallelepipeds and cylinders. The tissues surrounding the electrode are accurately modelled from data provided by diffusion tensor magnetic resonance imaging (DT-MRI). Three different configurations of the RE are compared with a commonly used model of reduced size. The electrical impedance seen by the DBS system and the potential distribution are computed for each model. Moreover, axons are modelled to compute the area of tissue activated by stimulation. Results show that these indicators are influenced by the surface and position of the RE. The use of an RE model corresponding to the implanted device rather than the usual simplified model leads to an increase in the system impedance (+48%) and a reduction in the area of activated tissue (-15%).

Relevance:

30.00%

Abstract:

Granular flow phenomena are frequently encountered in the design of process and industrial plants in the traditional fields of the chemical, nuclear and oil industries, as well as in other activities such as food and materials handling. Multi-phase flow is one important branch of granular flow. Granular materials behave in unusual ways compared to normal materials, whether solids or fluids. Although some of their characteristics are still not well understood, one thing is confirmed: the particle-particle interaction plays a key role in the dynamics of granular materials, especially dense ones. The first part of this thesis presents in detail the development of two models describing this interaction, based on the results of finite-element simulation, dimensional analysis and numerical simulation. The first model describes the normal collision of viscoelastic particles; starting from existing models, additional parameters are introduced, which makes the model reproduce experimental results more accurately. The second model covers oblique collisions and includes the effects of tangential velocity, angular velocity and surface friction based on Coulomb's law. The theoretical predictions of this model are in agreement with finite-element simulations. In the later chapters of the thesis, the models are used to predict industrial granular flows, and the agreement between simulations and experiments further validates the new models. The first case presents the simulation of granular flow passing over a circular obstacle. The simulations successfully predict the existence of a parabolic steady layer and show how particle characteristics, such as the coefficients of restitution and surface friction, affect the separation results. The second case is a spinning container filled with granular material. Employing the previous models, the simulation also reproduces experimentally observed phenomena, such as a depression in the center at high rotation frequencies. The third application concerns gas-solid mixed flow in a vertically vibrated device. A gas phase is added and coupled to the particle motion; the governing equations of the gas phase are solved using large eddy simulation (LES), while particle motion is predicted using the Lagrangian method. The simulation reproduces some of the pattern formation reported in experiments.
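
As a point of reference for the normal-collision part, the classical linear spring-dashpot contact force for viscoelastic particles can be sketched as follows; this is shown only as a common baseline for models of this kind, with parameter names chosen here for illustration, and is not claimed to be the specific model developed in the thesis.

    import numpy as np

    def normal_contact_force(overlap, overlap_rate, k_n, eta_n):
        """F_n = k_n * delta + eta_n * d(delta)/dt, repulsive contacts only."""
        f = k_n * overlap + eta_n * overlap_rate
        return max(f, 0.0)                       # no attractive force at separation

    def restitution_coefficient(k_n, eta_n, m_eff):
        """Coefficient of restitution implied by the linear model (under-damped case)."""
        omega0 = np.sqrt(k_n / m_eff)
        beta = eta_n / (2.0 * m_eff)
        omega = np.sqrt(omega0 ** 2 - beta ** 2)
        return np.exp(-np.pi * beta / omega)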

Relevance:

30.00%

Abstract:

The main goal of this paper is to propose a convergent finite volume method for a reaction-diffusion system with cross-diffusion. First, we sketch an existence proof for a class of cross-diffusion systems. Then the standard two-point finite volume fluxes are used in combination with a nonlinear positivity-preserving approximation of the cross-diffusion coefficients. Existence and uniqueness of the approximate solution are addressed, and it is also shown that the scheme converges to the corresponding weak solution for the studied model. Furthermore, we provide a stability analysis to study pattern-formation phenomena, and we perform two-dimensional numerical examples which exhibit formation of nonuniform spatial patterns. From the simulations it is also found that experimental rates of convergence are slightly below second order. The convergence proof uses two ingredients of interest for various applications, namely the discrete Sobolev embedding inequalities with general boundary conditions and a space-time $L^1$ compactness argument that mimics the compactness lemma due to Kruzhkov. The proofs of these results are given in the Appendix.
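
To fix ideas about the two-point flux, here is a much-reduced sketch: one space dimension, a single scalar unknown, a uniform mesh and explicit time stepping, with the face diffusivity clipped to stay non-negative. The paper's scheme treats the full cross-diffusion system with its positivity-preserving coefficient approximation, which is not reproduced here.

    import numpy as np

    def fv_step(u, D, dx, dt):
        """One explicit finite volume step for u_t = (D(u) u_x)_x with no-flux boundaries."""
        # face diffusivities from the two neighbouring cells, clipped at zero
        Df = 0.5 * (np.maximum(D(u[:-1]), 0.0) + np.maximum(D(u[1:]), 0.0))
        flux = -Df * (u[1:] - u[:-1]) / dx              # two-point flux at interior faces
        flux = np.concatenate(([0.0], flux, [0.0]))     # zero-flux (Neumann) boundaries
        return u - dt / dx * (flux[1:] - flux[:-1])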

Relevance:

30.00%

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and if the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program that reduce the modelling and optimisation time for structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; the programs combine an optimisation problem modelling tool and an FE modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and in the steel structures of flat and ridge roofs. This thesis demonstrates that the most time-consuming modelling work is significantly reduced, modelling errors are reduced, and the results are more reliable. A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. It is seen that the tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
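
A well-known example of a penalty-free comparison rule of this general kind is the feasibility rule popularised by Deb, sketched below for orientation; it is a standard rule quoted here only as an illustration and is not necessarily the selection rule developed in the thesis.

    def better(a, b):
        """a, b: dicts with 'objective' (to be minimised) and 'violation'
        (sum of constraint violations, 0 for a feasible design)."""
        if a["violation"] == 0 and b["violation"] == 0:
            return a if a["objective"] < b["objective"] else b   # both feasible: lower objective wins
        if a["violation"] == 0 or b["violation"] == 0:
            return a if a["violation"] == 0 else b               # feasible beats infeasible
        return a if a["violation"] < b["violation"] else b       # both infeasible: smaller violation wins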